Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception

Schall, S., & von Kriegstein, K. (2014). Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception. PLoS One, 9(1): e86325. doi:10.1371/journal.pone.0086325.

Basic

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0015-83BF-3
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-81CD-4
Genre: Journal Article

Files

Schall_FunctionalConnectivity.pdf (Publisher version), 500KB
Name: Schall_FunctionalConnectivity.pdf
Description: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: © 2014 Schall, von Kriegstein. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Creators

Schall, Sonja¹, Author
von Kriegstein, Katharina¹,², Author
Affiliations:
¹ Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634556
² Humboldt University Berlin, Germany, ou_persistent22

Content

Free keywords: -
Abstract: It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
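
As a rough illustration of the measure the abstract refers to, the sketch below computes a seed-based functional connectivity estimate: the Pearson correlation between two region-of-interest time series, Fisher z-transformed for comparison across conditions. The simulated pSTS/aSTS signals and all variable names are assumptions for illustration only, not the study's data or its actual analysis pipeline.

    import numpy as np

    # Illustrative only: simulated mean BOLD time series for two seed regions.
    # psts = face-movement sensitive posterior STS seed,
    # asts = speech-intelligibility sensitive anterior STS seed.
    rng = np.random.default_rng(0)
    n_volumes = 200
    psts = rng.standard_normal(n_volumes)
    asts = 0.5 * psts + rng.standard_normal(n_volumes)

    # Functional connectivity as the Pearson correlation between the seeds.
    r = np.corrcoef(psts, asts)[0, 1]

    # Fisher z-transform, commonly applied before comparing connectivity
    # across conditions (e.g., voice-face vs. voice-occupation training).
    z = np.arctanh(r)
    print(f"r = {r:.3f}, Fisher z = {z:.3f}")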

Details

Language(s): eng - English
Dates: 2013-07-11, 2013-12-06, 2014-01-23
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: Peer
Identifiers: DOI: 10.1371/journal.pone.0086325
PMID: 24466026
PMC: PMC3900530
Other: eCollection 2014
Degree: -

Source 1

Title: PLoS One
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: San Francisco, CA : Public Library of Science
Pages: -
Volume / Issue: 9 (1)
Sequence Number: e86325
Start / End Page: -
Identifier: ISSN: 1932-6203
CoNE: https://pure.mpg.de/cone/journals/resource/1000000000277850