
Released

Journal Article

Temporal processing of self-motion: modeling reaction times for rotations and translations

MPG Authors
/persons/resource/persons84229

Soyka, F
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83796

Barnett-Cowan, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Soyka, F., Bülthoff, H., & Barnett-Cowan, M. (2013). Temporal processing of self-motion: modeling reaction times for rotations and translations. Experimental Brain Research, 228(1), 51-62. doi:10.1007/s00221-013-3536-y.


Citation link: https://hdl.handle.net/11858/00-001M-0000-001A-140E-F
Abstract
In this paper, we show that differences in reaction times (RT) to self-motion depend not only on the duration of the profile, but also on the actual time course of the acceleration. We previously proposed models that described direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semi-circular canals). As these models have the potential to describe RT for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles or varying profile durations), we validated these models by measuring RTs in human observers for a direction discrimination task using both translational and rotational motions varying in amplitude, duration and acceleration profile shape in a within-subjects design. In agreement with previous studies, amplitude and duration were found to affect RT, and importantly, we found an influence of the profile shape on RT. The models are able to fit the measured RTs with an accuracy of around 5 ms, and the best-fitting parameters are similar to those found from identifying the models based on threshold measurements. This confirms the validity of the modeling approach and links perceptual thresholds to RT. By establishing a link between vestibular thresholds for self-motion and RT, we show for the first time that RTs to purely inertial motion stimuli can be used as an alternative to threshold measurements for identifying self-motion perception models. This is advantageous, since RT tasks are less challenging for participants and make assessment of vestibular function less fatiguing. Further, our results provide strong evidence that the perceived timing of self-motion stimulation is largely influenced by the response dynamics of the vestibular sensory organs.
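The abstract describes predicting reaction times from the response dynamics of the vestibular organs via a threshold-crossing mechanism. The following Python sketch illustrates that general idea only: the acceleration profile is passed through a first-order high-pass filter standing in for the vestibular dynamics (an assumed simplification, not the transfer functions identified in the paper), and the predicted RT is the first time the internal signal exceeds a detection threshold, plus a fixed residual delay. The parameter values TAU, THRESHOLD, and FIXED_DELAY are illustrative assumptions, not values from the study.

import numpy as np
from scipy.signal import lti, lsim

# Illustrative parameters (assumptions, not fitted values from the paper)
TAU = 5.0          # filter time constant in seconds
THRESHOLD = 0.1    # internal detection threshold (arbitrary units)
FIXED_DELAY = 0.2  # non-vestibular processing/motor delay in seconds

def predict_rt(t, acceleration):
    """Predict reaction time as the first threshold crossing of the
    filtered acceleration signal plus a fixed residual delay.

    The high-pass transfer function TAU*s / (TAU*s + 1) is a stand-in
    for the vestibular dynamics and is an assumption of this sketch."""
    system = lti([TAU, 0.0], [TAU, 1.0])
    _, internal, _ = lsim(system, U=acceleration, T=t)
    above = np.flatnonzero(np.abs(internal) >= THRESHOLD)
    if above.size == 0:
        return None  # stimulus never detected with these parameters
    return t[above[0]] + FIXED_DELAY

# Example: single-cycle sinusoidal acceleration profile, 1.4 s duration
t = np.linspace(0.0, 1.4, 1401)          # 1 ms time steps
acc = 1.0 * np.sin(2 * np.pi * t / 1.4)  # peak acceleration 1 m/s^2
print(predict_rt(t, acc))

Under this kind of model, a profile shape that reaches high acceleration earlier produces an earlier threshold crossing and thus a shorter predicted RT, which is consistent with the profile-shape effect reported in the abstract.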