Meeting Abstract

Dorsal face-movement and ventral face-form regions are functionally connected during visual-speech recognition

MPS-Authors

Borowiak, Kamila
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
TU Dresden, Germany;
Berlin School of Mind and Brain, Humboldt University Berlin, Germany;


von Kriegstein, Katharina
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
TU Dresden, Germany;

Citation

Borowiak, K., & von Kriegstein, K. (2019). Dorsal face-movement and ventral face-form regions are functionally connected during visual-speech recognition. Journal of Vision, 19(10): 183a. doi:10.1167/19.10.183a.


Cite as: http://hdl.handle.net/21.11116/0000-0005-1A5A-A
Abstract
Facial emotion perception involves functional connectivity between dorsal-movement and ventral-form brain regions (Furl et al., 2014, Cereb. Cortex; Foley et al., 2012, J. Cogn. Neurosci.). Here, we tested the hypothesis that such connectivity also exists for visual-speech processing and explored how it relates to impaired visual-speech recognition in high-functioning autism spectrum disorder (ASD) (Borowiak et al., 2018, Neuroimage Clin.; Schelinski et al., 2014, Neuropsychologia). Seventeen typically developing adults (control group) and seventeen adults with high-functioning ASD (ASD group) participated. Groups were matched pairwise on age, gender, handedness, and intelligence quotient. The study included a combined functional magnetic resonance imaging (fMRI) and eye-tracking experiment on visual-speech recognition, a functional localizer, and behavioral assessments of face-recognition abilities. In the visual-speech recognition experiment, participants viewed blocks of muted videos of speakers articulating syllables. Before each block, participants were instructed to recognize either the articulated syllable (visual-speech task) or the identity of the articulating person (face-identity task). Functional connectivity was assessed with a psychophysiological interaction (PPI) analysis based on the contrast “visual-speech task > face-identity task”. The functional localizer was used to define, in each participant, seed regions in dorsal-movement regions (visual motion area V5/MT and the temporal visual speech area, TVSA) and target regions in ventral-form regions (the occipital face area, OFA, and the fusiform face area, FFA). In both groups, dorsal-movement regions were functionally connected to ventral-form regions during visual-speech vs. face-identity recognition (p < .0125, FWE-corrected). Part of this connectivity was decreased in the ASD group compared to the control group (i.e., right V5/MT with right OFA, and left TVSA with left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing, and showed that parts of it are reduced in ASD.
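
The PPI logic described in the abstract can be illustrated with a minimal sketch. The block timings, repetition time, HRF parameters, and simulated seed and target signals below are assumptions chosen for illustration, not the study's actual design or analysis pipeline (which would typically be run in SPM, FSL, or nilearn and would deconvolve the seed signal before forming the interaction term). The sketch only shows how a psychological regressor, a seed timecourse, and their element-wise product enter a target-region GLM.

# Minimal PPI sketch for a contrast like "visual-speech task > face-identity task".
# Illustrative assumptions only: TR, block timing, and all signals are simulated.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
TR, n_scans = 2.0, 200                      # assumed repetition time and run length
t = np.arange(n_scans) * TR

# Psychological regressor: +1 during visual-speech blocks, -1 during
# face-identity blocks, 0 during rest (assumed 20-s alternating blocks).
psy = np.zeros(n_scans)
for onset in range(0, n_scans, 40):
    psy[onset:onset + 10] = 1.0             # visual-speech block
    psy[onset + 20:onset + 30] = -1.0       # face-identity block

# Canonical double-gamma HRF; convolve the psychological regressor with it.
hrf_t = np.arange(0, 32, TR)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0
hrf /= hrf.sum()
psy_conv = np.convolve(psy, hrf)[:n_scans]

# Simulated seed timecourse from a dorsal-movement region (e.g. V5/MT);
# in practice this is the mean BOLD signal from the localizer-defined seed.
seed = psy_conv + 0.5 * rng.standard_normal(n_scans)

# PPI term: product of the psychological regressor and the seed timecourse
# (simplified, non-deconvolved variant).
ppi = psy_conv * seed

# GLM on a target-region signal (e.g. OFA or FFA): target ~ ppi + psy + seed + intercept.
target = 0.4 * ppi + 0.2 * seed + 0.3 * rng.standard_normal(n_scans)
X = np.column_stack([ppi, psy_conv, seed, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (task-dependent coupling estimate): {beta[0]:.3f}")

In this simplified form, the beta on the interaction regressor indexes how the coupling between seed and target changes between the two tasks; it is this kind of task-dependent connectivity estimate that a group comparison (control vs. ASD) would be based on.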
Facial emotion perception involves functional connectivity between dorsal-movement and ventral-form brain regions (Furl et al., 2014, Cereb. Cortex; Foley et al., 2012, J. Cogn. Neurosci.). Here, we tested the hypothesis that such connectivity also exists for visual-speech processing and explored how it is related to impaired visual-speech recognition in high-functioning autism spectrum disorder (ASD) (Borowiak et al., 2018, Neuroimage Clin.; Schelinski et al., 2014, Neuropsychologia). Seventeen typically developed adults (control group) and seventeen adults with high-functioning ASD (ASD group) participated. Groups were matched pairwise on age, gender, handedness and intelligence quotient. The study included a combined functional magnetic resonance imaging (fMRI) and eye-tracking experiment on visual-speech recognition, a functional localizer and behavioral assessments of face recognition abilities. In the visual-speech recognition experiment, participants viewed blocks of muted videos of speakers articulating syllables. Before each block, participants were instructed to recognize either the articulated syllable (visual-speech task) or the identity of the articulating person (face-identity task). Functional connectivity was assessed using psycho-physiological interaction analysis (PPI) based on the contrast “visual-speech task > face-identity task”. The functional localizer was used to localize seed regions in the individual dorsal-movement regions (visual motion area 5 (V5/MT), temporal visual speech area (TVSA)) and target regions in the ventral-form regions (occipital face area (OFA), fusiform face area (FFA)). In both groups, dorsal-movement regions were functionally connected to ventral-form regions during visual-speech vs. face-identity recognition (p < .0125 FWE corrected). Part of this connectivity was decreased in the ASD group compared to the control group (i.e., right V5/MT- right OFA, left TVSA – left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing, but parts of it are reduced in ASD.