
Released

Poster

Cognitive factors facilitate multimodal integration

MPS-Authors
/persons/resource/persons83960

Helbig, HB
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83906

Ernst, MO
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Helbig, H., & Ernst, M. (2005). Cognitive factors facilitate multimodal integration. Poster presented at 8th Tübinger Wahrnehmungskonferenz (TWK 2005), Tübingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D63B-0
Abstract
Ernst and Banks (2002) showed that humans integrate visual and haptic signals in a statistically optimal way if they are derived from the same spatial location. Integration seems to break down if there is a spatial discrepancy between the signals (Gepshtein et al., in press). Can cognitive factors facilitate integration even when the signals are presented at two spatial locations? We conducted two experiments: in the first, visual and haptic information was presented at the same location; in the second, subjects looked at the object through a mirror while touching it, so there was a spatial offset between the two information sources. If cognitive factors are sufficient for integration to occur, i.e., knowledge that the object seen in the mirror is the same as the one touched, we expect no difference between the two experimental results. If integration breaks down due to the spatial discrepancy, we expect subjects’ percepts in the mirror condition to be less biased by multimodal information. To study integration, participants looked at the object through a distortion lens. This way, in both the “mirrored” and “direct vision” conditions, there was a slight shape conflict between the visual and haptic modalities. After looking at and feeling the object simultaneously, participants reported the perceived shape by either visually or haptically matching it to a reference object. Both experiments revealed that the shape percept was in between the haptically and visually specified shapes. Importantly, there was no significant difference between the results of the two experiments, regardless of whether subjects matched the shape visually or haptically. However, we found a significant difference between matching by touch and matching by vision: haptic judgments were biased towards the haptic input, and visual judgments towards the visual input. In conclusion, multimodal signals seem to be combined if observers have high-level cognitive knowledge that the signals belong to the same object, even when there is a spatial discrepancy.
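For reference, "statistically optimal" integration in the sense of Ernst and Banks (2002) means a maximum-likelihood, reliability-weighted average of the single-modality estimates. A minimal sketch of that standard model follows; the notation is ours, not from the poster, with \hat{S}_V and \hat{S}_H denoting the visual and haptic shape estimates and \sigma_V^2, \sigma_H^2 their respective noise variances:

\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2},
\qquad
w_H = 1 - w_V,

with predicted combined variance

\sigma_{VH}^2 = \frac{\sigma_V^2 \, \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \;\le\; \min\left(\sigma_V^2, \sigma_H^2\right).

Under this model, a percept lying between the visually and haptically specified shapes is what a reliability-weighted average predicts; the residual pull towards the response modality reported above is an additional effect not captured by the basic model.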