
Record


Released

Conference Paper

Cross-modal perception of actively explored objects

MPG Authors
Newell, F (/persons/resource/persons84426)
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bülthoff, HH (/persons/resource/persons83839)
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Ernst, M (/persons/resource/persons83906)
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

External Resources
No external resources have been recorded.
Full texts (restricted access)
There are currently no full texts released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Newell, F., Bülthoff, H. H., & Ernst, M. (2003). Cross-modal perception of actively explored objects. In EuroHaptics International Conference 2003 (pp. 291-299). Dublin, Ireland: Trinity College Dublin.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-DC1C-6
Abstract
Many objects in our world can be picked up and freely manipulated, making information about an object available to both the visual and haptic systems. However, we understand very little about how object information is shared across the modalities. Under constrained viewing, cross-modal object recognition is most efficient when the same surface of an object is presented to the visual and haptic systems [5]. Here we tested cross-modal recognition of novel objects under active manipulation and unconstrained viewing. These objects were designed such that each surface provided unique information. In Experiment 1, participants were allowed 30 seconds to learn the objects visually or haptically. Haptic learning resulted in poor haptic recognition performance relative to visual recognition. In Experiment 2, we increased the learning time for haptic exploration and found equivalent haptic and visual recognition, but a cost in cross-modal recognition. In Experiment 3, participants learned the objects using both modalities together, vision alone, or haptics alone; recognition performance was then tested using both modalities together. We found that recognition performance was significantly better when objects were learned by both modalities than by either modality alone. Our results suggest that efficient cross-modal performance depends on the spatial correspondence of object surface information across modalities.