Released

Journal Article

Visual face-movement sensitive cortex is relevant for auditory-only speech recognition

MPS-Authors
Riedel, Philipp
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus, Dresden, Germany;
Neuroimaging Center, TU Dresden, Germany;

Ragert, Patrick
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Schelinski, Stefanie
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Kiebel, Stefan J.
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Neuroimaging Center, TU Dresden, Germany;

von Kriegstein, Katharina
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Psychology, Humboldt University Berlin, Germany;

External Resource
No external resources are shared
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Riedel, P., Ragert, P., Schelinski, S., Kiebel, S. J., & von Kriegstein, K. (2015). Visual face-movement sensitive cortex is relevant for auditory-only speech recognition. Cortex, 68, 86-99. doi:10.1016/j.cortex.2014.11.016.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0025-79CC-0
Abstract
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks (‘auditory-only view’). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance (‘auditory-visual view’). This alternative view is based on functional magnetic resonance imaging (fMRI) studies showing, for example, that even when only auditory input is available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers the participants had learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that the visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the ‘auditory-visual view’ of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models.