  Mechanisms of enhancing visual-speech recognition by prior auditory information

Blank, H., & von Kriegstein, K. (2013). Mechanisms of enhancing visual-speech recognition by prior auditory information. NeuroImage, 65, 109-118. doi:10.1016/j.neuroimage.2012.09.047.

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-000F-ED9B-8
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-AFDF-E
Genre: Journal Article

Files

Blank_2013_Mechanisms.pdf (Publisher version), 2MB
 
File Permalink: -
Name: Blank_2013_Mechanisms.pdf
Description: -
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Creators

Creators:
Blank, Helen (1), Author
von Kriegstein, Katharina (1), Author
Affiliations:
(1) Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634556

Content

Free keywords: fMRI; Lip-reading; Multisensory; Predictive coding; Speech reading
Abstract: Speech recognition from visual-only faces is difficult, but can be improved by prior information about what is said. Here, we investigated how the human brain uses prior information from auditory speech to improve visual-speech recognition. In a functional magnetic resonance imaging study, participants performed a visual-speech recognition task, indicating whether the word spoken in visual-only videos matched the preceding auditory-only speech, and a control task (face-identity recognition) containing exactly the same stimuli. We localized a visual-speech processing network by contrasting activity during visual-speech recognition with the control task. Within this network, the left posterior superior temporal sulcus (STS) showed increased activity and interacted with auditory-speech areas if prior information from auditory speech did not match the visual speech. This mismatch-related activity and the functional connectivity to auditory-speech areas were specific for speech, i.e., they were not present in the control task. The mismatch-related activity correlated positively with performance, indicating that posterior STS was behaviorally relevant for visual-speech recognition. In line with predictive coding frameworks, these findings suggest that prediction error signals are produced if visually presented speech does not match the prediction from preceding auditory speech, and that this mechanism plays a role in optimizing visual-speech recognition by prior information.

Details

Language(s): eng - English
Dates: 2012-09-20, 2012-09-27, 2013-01-15
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Method: Peer
Identifiers: DOI: 10.1016/j.neuroimage.2012.09.047
PMID: 23023154
Other: Epub 2012
Degree: -

Source 1

Title: NeuroImage
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: Orlando, FL : Academic Press
Pages: -
Volume / Issue: 65
Sequence Number: -
Start / End Page: 109 - 118
Identifier: ISSN: 1053-8119
CoNE: https://pure.mpg.de/cone/journals/resource/954922650166