
Released

Journal Article

Visual, haptic and crossmodal recognition of scenes

MPS-Authors

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Newell, F., Woods, A., Mernagh, M., & Bülthoff, H. (2004). Visual, haptic and crossmodal recognition of scenes. Experimental Brain Research, 161(2), 233-242. doi:10.1007/s00221-004-2067-y.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D785-2
Abstract
Real-world scene perception can often involve more than one sensory modality. Here we investigated the visual, haptic and crossmodal recognition of scenes of familiar objects. In three experiments, participants first learned a scene of objects arranged in random positions on a platform. After learning, the experimenter swapped the positions of two objects in the scene, and the participant's task was to identify the two swapped objects. In Experiment 1, we found a cost in scene recognition performance when there was a change in sensory modality and scene orientation between learning and test. The cost in crossmodal performance was not due to the participants verbally encoding the objects (Experiment 2) or to differences between serial and parallel encoding of the objects during haptic and visual learning, respectively (Experiment 3). Instead, our findings suggest that differences between visual and haptic representations of space may affect the recognition of scenes of objects across these modalities.