  Looking in the mirror does not prevent multimodal integration

Helbig, H., & Ernst, M. (2005). Looking in the mirror does not prevent multimodal integration. Poster presented at Fifth Annual Meeting of the Vision Sciences Society (VSS 2005), Sarasota, FL, USA.

Creators:
Helbig, HB (1, 2), Author
Ernst, MO (1, 2), Author
Affiliations:
1: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content
 Abstract: Ernst Banks (2002) showed that humans integrate visual and haptic signals in a statistically optimal way if they are derived from the same spatial location. Integration seems to be broken if there is a spatial discrepancy between the signals (Gepshtein et al., VSS 04).

Can cognitive factors facilitate integration even when the signals are presented at two spatial locations? We conducted two experiments. In the first, visual and haptic information was presented at the same location. In the second, subjects looked at the object through a mirror while touching it, so there was a spatial offset between the two information sources. If cognitive factors are sufficient for integration to occur, i.e., knowledge that the object seen in the mirror is the same as the one touched, we expect no difference between the two experimental results. If integration breaks down due to the spatial discrepancy, we expect subjects' percepts to be less biased by multimodal information.

To study integration, participants looked at an object through a distortion lens. This way, for both the “mirrored” and “direct vision” conditions there was a slight shape conflict between the visual and haptic modalities. After looking at and feeling the object simultaneously, participants reported the perceived shape by either visually or haptically matching it to a reference object.

Both experiments revealed that the shape percept was in-between the haptically and visually specified shapes. Importantly, there was no significant difference between the two experimental results, regardless of whether subjects matched the shape visually or haptically. However, we found a significant difference between matching by touch and matching by vision: haptic judgments were biased towards the haptic input, and visual judgments towards the visual input.

In conclusion, multimodal signals seem to be combined if observers have high-level cognitive knowledge about the signals belonging to the same object, even when there is a spatial discrepancy.

Details
 Dates: 2005-09
 Publication Status: Issued
 Identifiers: DOI: 10.1167/5.8.750
 BibTex Citekey: 3495

Event
Title: Fifth Annual Meeting of the Vision Sciences Society (VSS 2005)
Place of Event: Sarasota, FL, USA
Start-/End Date: 2005-05-06 - 2005-05-11


Source 1
Title: Journal of Vision
Source Genre: Journal
Publ. Info: Charlottesville, VA : Scholar One, Inc.
Volume / Issue: 5 (8)
Start / End Page: 750
Identifier: ISSN: 1534-7362
CoNE: https://pure.mpg.de/cone/journals/resource/111061245811050