Released

Poster

Visual-Vestibular Cue Combination during Temporal Asynchrony

MPG Authors
Campos, JL
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Butler, JS
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Campos, J., Butler, J., & Bülthoff, H. (2009). Visual-Vestibular Cue Combination during Temporal Asynchrony. Poster presented at 10th International Multisensory Research Forum (IMRF 2009), New York, NY, USA.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C435-2
Abstract
Currently, little is known about the principles underlying human visual-vestibular integration during self-motion. Previous work from our lab has shown that 3D visual information combines with vestibular cues in a statistically optimal fashion, even when spatial offsets between the two cues are introduced. In this experiment we extended this research question by evaluating the effects of introducing temporal offsets between visual and vestibular cues during a heading judgment task. The experiment was conducted using a Stewart motion platform equipped with a 90-degree, wide field-of-view projection screen. Participants were presented with a linear, diagonal movement and asked whether they were heading in a rightward or leftward direction relative to their starting position. Self-motion information was presented via visual cues alone, vestibular cues alone, or both cues combined. In the combined condition the two cues were either congruent (1/7 of trials) or incongruent in their temporal order. The temporal offsets ranged from -0.5 s (visual motion started before vestibular motion) to +0.5 s (vestibular motion started before visual motion). The temporal offsets were presented in a random order, and participants completed 6 daily sessions of 1.5 hours each. Results demonstrate that, for the first half of the trials, the highest variance was observed in the extreme temporal offset trials (+/- 0.5 s). However, for the second half of the trials, the highest variance was actually observed in the congruent-cue trials, with the lowest variance observed for the extreme temporal offsets. These findings indicate that the way in which visual and vestibular information is combined changes dynamically as a function of increased exposure to discrepant cue information.
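
The "statistically optimal fashion" mentioned in the abstract is commonly formalized as maximum-likelihood (MLE) cue combination, in which each cue is weighted in proportion to its reliability. The Python sketch below illustrates that standard prediction only; it is not code or data from the poster, and the single-cue threshold values are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of the standard maximum-likelihood (MLE) cue-combination
# prediction often meant by "statistically optimal" integration.
# The numbers below are illustrative placeholders, not data from the poster.

def mle_combination(sigma_vis, sigma_vest):
    """Return optimal cue weights and the predicted combined-estimate SD."""
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    w_vest = 1.0 - w_vis
    sigma_comb = np.sqrt((sigma_vis**2 * sigma_vest**2) /
                         (sigma_vis**2 + sigma_vest**2))
    return w_vis, w_vest, sigma_comb

# Example: hypothetical single-cue heading-discrimination thresholds (deg).
w_vis, w_vest, sigma_comb = mle_combination(sigma_vis=3.0, sigma_vest=5.0)
print(f"visual weight = {w_vis:.2f}, vestibular weight = {w_vest:.2f}")
print(f"predicted combined SD = {sigma_comb:.2f} deg")
# The predicted combined SD is never larger than the smaller single-cue SD;
# measured bimodal variance is compared against this benchmark.
```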