Poster

Examining Egocentric Spatial Representations Referenced to Head and Body in the Healthy Brain

MPS-Authors
Schindler, A
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bartels, A
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Schindler, A., & Bartels, A. (2017). Examining Egocentric Spatial Representations Referenced to Head and Body in the Healthy Brain. Poster presented at 23rd Annual Meeting of the Organization for Human Brain Mapping (OHBM 2017), Vancouver, BC, Canada.


Cite as: https://hdl.handle.net/21.11116/0000-0000-C45D-B
Abstract
Introduction: Spatial representations in distinct reference frames are essential for human behavior. Visual input is received in retinotopic coordinates. Our conscious experience of the environment is, however, non-retinotopic, as eye and head movements are typically subtracted from retinal input to provide us with world-centered perceptual stability. Our immediate interactions with the external world occur in self-centered reference frames. Limb actions are inherently body-centered and need to be invariant to eye or head movements. In turn, head or body movements shift different parts of the environment in and out of the field of view. Egocentric maps of our full surroundings are thus not only essential for integration across senses such as audition, vision, and touch, but also ensure a stable phenomenal experience of our surroundings based on continual updating of internal spatial maps in the face of changing sensory input. In the present study, we used a novel virtual reality paradigm involving tilted head positions during fMRI, together with multivariate analyses, to identify head- and body-centered representations as well as their modulation by attention.

Methods: Classically, the participant's fixed head and body orientation inside the scanner prevents disentangling head- from body-centered reference systems. To circumvent this problem, we used a modified version of a virtual reality paradigm we introduced previously (Schindler & Bartels, 2013). Participants had to imagine the locations of six distinct objects surrounding them, arranged in a hexagon, including locations in front of and behind them. Importantly, participants' heads were rotated by +60° or -60° in different conditions, such that head and body axes were misaligned. This paradigm allowed us to systematically disentangle head- from body-centered neural representations. In addition, participants performed the task under two distinct attentional sets, involving imagery in either head- or body-centered coordinates. This allowed us to probe modulation of head- and body-centered spatial representations by attention to either reference frame. Participants underwent several days of extensive training in which they reached ceiling performance in learning the object locations within the surrounding hexagon. Training was performed outside the scanner using virtual reality goggles. Participants were placed in the center of a virtual hexagonal room that contained a unique object in each corner. Every few trials, the participants' viewpoint rotated such that they faced a different corner, i.e., a different allocentric location. This allowed us to isolate six abstract egocentric directions, regardless of the identity of the reference objects or of allocentric representations. When performance reached criterion, participants were invited for fMRI scanning and performed a modified task inside the scanner. Using multivariate voxel analysis, we identified egocentric representations beyond the visual field in both body and head coordinates.

Results: We found significant decoding of egocentric directions in head- as well as body-centered reference frames in a network of brain areas associated with spatial processing, attention, and lesion sites of spatial neglect patients. Among these were the precuneus, prefrontal cortex, and parietal cortex. Whereas egocentric codes for body- and head-centered representations overlapped in most regions, some regions showed biases towards one reference frame. Attention to body- or head-centered coordinates tended to modulate decoding accuracy in favor of the respective representation.

Conclusions: Our results provide evidence for the presence of both head- and body-centered neural spatial representations in the human brain. While most of these representations appear to be co-localized, a subset of brain areas was tuned to head or body coordinates. In addition, our results show that the distinct representations can be modulated by attention.
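
The abstract does not specify how the multivariate voxel analysis was implemented. The sketch below is only a minimal illustration of the decoding logic described above, written in Python with scikit-learn and synthetic data; all variable names, parameters, and the classifier choice are assumptions for illustration, not details taken from the study. It shows how the ±60° head rotations dissociate head-centered from body-centered direction labels, and how cross-validated classification of either label set could be run on trial-wise voxel patterns.

# Illustrative sketch only (not the authors' pipeline): decoding the six
# egocentric directions in head- vs body-centered frames from voxel patterns.
# Synthetic data stand in for trial-wise fMRI pattern estimates.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_trials, n_voxels = 240, 200
directions = np.arange(0, 360, 60)          # six hexagon directions (degrees)

# Per trial: body-centered target direction and head rotation (+60 or -60 deg).
body_dir = rng.choice(directions, size=n_trials)
head_rot = rng.choice([+60, -60], size=n_trials)

# Head-centered direction: body-centered direction compensated for head rotation.
head_dir = (body_dir - head_rot) % 360

# Synthetic voxel patterns carrying a weak head-centered signal (illustration only).
prototypes = rng.normal(size=(len(directions), n_voxels))
X = prototypes[head_dir // 60] * 0.3 + rng.normal(size=(n_trials, n_voxels))

cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
clf = LinearSVC(max_iter=5000)

# Decode each label set separately; above-chance accuracy for a label set would
# indicate information in the corresponding reference frame.
for name, labels in [("body-centered", body_dir), ("head-centered", head_dir)]:
    acc = cross_val_score(clf, X, labels, cv=cv).mean()
    print(f"{name} decoding accuracy: {acc:.2f} (chance = {1/len(directions):.2f})")

In the study itself, such decoding would be applied to real pattern estimates (for example within regions of interest or a searchlight), and the attention manipulation would be tested by comparing decoding accuracies between the two attentional sets; those steps are omitted here.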