
Record

Released

Poster

Combining sensory cues for spatial orientation: Assessing the contribution of different modalities in the facilitation of mental rotations

MPG Authors
/persons/resource/persons84281

Vidal,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83840

Bülthoff,  I
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
There are currently no full texts shared for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Lehmann, A., Vidal, M., & Bülthoff, I. (2007). Combining sensory cues for spatial orientation: Assessing the contribution of different modalities in the facilitation of mental rotations. Poster presented at 8th International Multisensory Research Forum (IMRF 2007), Sydney, Australia.


Citation link: https://hdl.handle.net/21.11116/0000-0003-F4C1-F
Abstract
Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. Previous studies have shown that the cognitive cost of mental rotations is reduced when the viewpoint change results from the observer's motion rather than from a rotation of the spatial layout, an advantage attributed to automatic spatial-updating mechanisms engaged during self-motion. Nevertheless, little is known about how this process is triggered, and in particular how sensory cues combine to enhance mental rotations. We developed a high-end virtual reality setup that, for the first time, allowed us to dissociate in a series of experiments each modality that can be stimulated during viewpoint changes. First, we validated this setup by replicating the classical advantage found for a moving observer. Second, we found that enhancing the possibilities for spatial binding by displaying the table during its rotation was not sufficient to significantly reduce the mental rotation cost. Third, we found that mental rotations are not significantly improved when only a single modality (vision or body) is stimulated during the observer's motion, whereas they are with a combination of two modalities (body & vision or body & sound). These results are discussed in terms of sensory-independent triggering of spatial updating during self-motion, with non-linear effects when sensory modalities are co-activated.