Journal Article

A stereo advantage in generalizing over changes in viewpoint on object recognition tasks

MPS-Authors

Vuong, QC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Bennett, D., & Vuong, Q. (2006). A stereo advantage in generalizing over changes in viewpoint on object recognition tasks. Perception & Psychophysics, 68(7), 1082-1093. Retrieved from http://www.ingentaconnect.com/content/psocpubs/prp/2006/00000068/00000007/art00003


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CFB5-A
Abstract
Four experiments examined whether generalization to unfamiliar views was better under stereo viewing than under nonstereo viewing across different tasks and stimuli. The first three experiments used a sequential matching task in which observers matched the identity of shaded tube-like objects. Across Experiments 1-3, we manipulated the presentation method of the nonstereo stimuli (viewing with an eye patch versus showing the same screen image to both eyes) and the magnitude of the viewpoint change (30° versus 38°). In Experiment 4, observers identified “easy” and “hard” rotating wireframe objects at the individual level under stereo and nonstereo viewing conditions. We found a stereo advantage for generalizing to unfamiliar views in all experiments. However, performance remained view-dependent even under stereo viewing. These results strongly argue against strictly 2D image-based models of object recognition, at least for the stimuli and recognition tasks used, and they suggest that observers used representations that contained view-specific local depth information.