  Multisensory perception of actively explored objects

Newell, F., Bülthoff, H., & Ernst, M. (2003). Multisensory perception of actively explored objects. In 4th International Multisensory Research Forum (IMRF 2003).

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-DC75-B
Version Permalink: http://hdl.handle.net/21.11116/0000-0006-C371-E
Genre: Meeting Abstract


Creators

Newell, FN, Author
Bülthoff, HH (1, 2), Author
Ernst, M (1, 2), Author
Affiliations:
Affiliations:
1: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Many objects in our world can be picked up and freely manipulated, allowing information about an object to be available to both the visual and haptic systems. However, we understand very little about how object information is shared across the modalities. Under constrained viewing, cross-modal object recognition is most efficient when the same surface of an object is presented to the visual and haptic systems (Newell et al. 2001). Here we tested cross-modal recognition under active manipulation and unconstrained viewing of the objects. In Experiment 1, participants were allowed 30 seconds to learn unfamiliar objects visually or haptically. Haptic learning resulted in poorer haptic recognition performance relative to visual recognition. In Experiment 2, we increased the learning time for haptic exploration and found equivalent haptic and visual recognition, but a cost in cross-modal recognition. In Experiment 3, participants learned the objects using both modalities together, vision alone, or haptics alone. Recognition performance was tested using both modalities together. We found that recognition performance was significantly better when objects were learned by both modalities than by either modality alone. Our results suggest that efficient cross-modal performance depends on the spatial correspondence of object information across modalities.

Details

Language(s):
 Dates: 2003-06
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: NewellBE2003
 Degree: -

Event

Title: 4th International Multisensory Research Forum (IMRF 2003)
Place of Event: Hamilton, Canada
Start-/End Date: 2003-06-14 - 2003-06-17


Source 1

Title: 4th International Multisensory Research Forum (IMRF 2003)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: 76
Start / End Page: -
Identifier: -