
Released

Poster

Human V6 integrates visual and extra-retinal cues during head induced gaze shifts

MPG Authors
/persons/resource/persons84189

Schindler,  A
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83797

Bartels,  A
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Schindler, A., & Bartels, A. (2018). Human V6 integrates visual and extra-retinal cues during head induced gaze shifts. Poster presented at 48th Annual Meeting of the Society for Neuroscience (Neuroscience 2018), San Diego, CA, USA.


Citation link: https://hdl.handle.net/21.11116/0000-0002-609E-0
Abstract
A key question in vision research concerns visual stability: how is visual information in retinal coordinates integrated with non-visual cues of self-induced motion to form the spatiotopic representations of the world that we perceive?
Eye movements have been found to modulate retinotopic representations at multiple stages along the visual stream, yet a special role has been attributed to human areas V3A and V6, as both cancel self-induced planar retinal motion during eye movements almost completely (Fischer, Bülthoff, Logothetis, & Bartels, 2012).
Beyond that, little is known about which human visual processing stages integrate head-motion signals with retinotopic representations, as human fMRI is typically incompatible with the execution of voluntary head movements.
We recently circumvented these limitations and introduced a novel paradigm that allows participants to move their heads during fMRI scanning (Schindler and Bartels, 2018). The temporal characteristics of the BOLD signal allowed us to decouple stimulus presentation from the acquisition of stimulus-evoked responses. Our custom-built, air-pressure-based head-stabilization system permitted head rotation during trials but stabilized head position during data acquisition. Video-based head tracking and head-mounted goggles allowed real-time generation of visual stimuli that took head motion into account.
Observers viewed approaching visual flow through head-mounted, MR-compatible goggles. A congruent condition simulated constant forward motion while the observer rotated the head relative to the body, as when looking around while being driven along a straight road. In the incongruent condition, observers performed identical head rotations, but the visual consequences were inverted such that visual and extra-retinal cues could not combine in any meaningful way. Crucially, both conditions were matched for head and retinal motion. Based on this paradigm, we previously examined the integration of head-motion and visual signals in regions with established vestibular processing.
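The logic of the two conditions can be illustrated with a minimal sketch. This is not the authors' actual stimulus code; the function name and the yaw-based formulation are hypothetical, assuming a gaze-contingent display whose scene rotation is updated from the tracked head yaw on every frame:

```python
# Illustrative sketch (hypothetical, not the authors' implementation):
# updating the rendered scene's yaw from tracked head yaw on a
# head-mounted display.
# Congruent: the scene counter-rotates with the head, so it stays
# world-stable, as in natural viewing.
# Incongruent: the visual consequence of the same head rotation is
# sign-inverted, so visual and extra-retinal cues cannot be combined
# meaningfully, while head motion and retinal-motion magnitude stay matched.

def scene_yaw_update(head_yaw_deg: float, condition: str) -> float:
    """Return the yaw rotation (degrees) applied to the rendered flow field."""
    if condition == "congruent":
        return -head_yaw_deg   # counter-rotate: world-stable scene
    if condition == "incongruent":
        return +head_yaw_deg   # inverted visual consequence, same magnitude
    raise ValueError(f"unknown condition: {condition}")

print(scene_yaw_update(10.0, "congruent"))    # -10.0
print(scene_yaw_update(10.0, "incongruent"))  # 10.0
```

Because only the sign of the visual update differs, the two conditions deliver identical head movements and matched retinal motion, isolating the congruency of visual and extra-retinal cues as the variable of interest.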
Here we asked whether early visual cortex as well as areas V3A and V6 integrate retinotopic visual representations with voluntary head motion. Contrasting congruent versus incongruent conditions revealed differential responses in human V6, but not in early visual regions or V3A, consistent with multimodal integration of visual cues with head motion in human area V6.
Our results extend previous evidence for multimodal integration in V6 to head-motion cues and are in line with the hypothesis that V6 serves as a crucial hub for the compensation of self-induced motion.