Item Details


Released

Poster

Cross-modal integration of visual and haptic information for object recognition: Effects of view changes and shape similarity

MPS-Authors
/persons/resource/persons84940

Lawson,  R
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Lawson, R., & Bülthoff, H. (2008). Cross-modal integration of visual and haptic information for object recognition: Effects of view changes and shape similarity. Poster presented at 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C857-1
Abstract
Four studies contrasted cross-modal object matching (visual to haptic and haptic to visual) with uni-modal matching (visual-visual and haptic-haptic). The stimuli were hand-sized, plastic models of familiar objects. There were twenty pairs of similarly-shaped objects (cup/jug, frog/lizard, spoon/knife, etc.) and a morph midway in shape between each pair. Objects at fixed orientations were presented sequentially behind an LCD screen. The screen was opaque for haptic inputs and clear for visual presentations. We tested whether a 90° depth rotation from the first to the second object impaired people's ability to detect shape changes. This achievement of object constancy over view changes was examined across different levels of task difficulty. Difficulty was varied between groups by manipulating shape similarity on mismatch trials. First, view changes from the first to the second object impaired performance in all conditions except haptic to visual matching. Second, for visual-visual matches only, these disruptive effects of task-irrelevant rotations were greater when the task was harder due to increased shape similarity on mismatches. Viewpoint thus influenced both visual and haptic object identification, but its effects differed across modalities and for uni-modal versus cross-modal matching. These results suggest that the effects of view changes are caused by modality-specific processes.