  Mechanisms of enhancing visual-speech recognition by prior auditory information

Blank, H., & von Kriegstein, K. (2013). Mechanisms of enhancing visual-speech recognition by prior auditory information. NeuroImage, 65, 109-118. doi:10.1016/j.neuroimage.2012.09.047.


Basic data

Genre: Journal article

Files

Blank_2013_Mechanisms.pdf (publisher version), 2 MB
 
File permalink: -
Name: Blank_2013_Mechanisms.pdf
Description: -
OA status:
Visibility: Private
MIME type / checksum: application/pdf
Technical metadata:
Copyright date: -
Copyright info: -
License: -

Creators

Creators:
Blank, Helen 1, Author
von Kriegstein, Katharina 1, Author
Affiliations:
1 Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634556

Content

Keywords: fMRI; Lip-reading; Multisensory; Predictive coding; Speech reading
Abstract: Speech recognition from visual-only faces is difficult, but can be improved by prior information about what is said. Here, we investigated how the human brain uses prior information from auditory speech to improve visual–speech recognition. In a functional magnetic resonance imaging study, participants performed a visual–speech recognition task, indicating whether the word spoken in visual-only videos matched the preceding auditory-only speech, and a control task (face-identity recognition) containing exactly the same stimuli. We localized a visual–speech processing network by contrasting activity during visual–speech recognition with the control task. Within this network, the left posterior superior temporal sulcus (STS) showed increased activity and interacted with auditory–speech areas if prior information from auditory speech did not match the visual speech. This mismatch-related activity and the functional connectivity to auditory–speech areas were specific for speech, i.e., they were not present in the control task. The mismatch-related activity correlated positively with performance, indicating that posterior STS was behaviorally relevant for visual–speech recognition. In line with predictive coding frameworks, these findings suggest that prediction error signals are produced if visually presented speech does not match the prediction from preceding auditory speech, and that this mechanism plays a role in optimizing visual–speech recognition by prior information.
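
The predictive-coding idea the abstract invokes can be illustrated with a toy computation: a prior expectation is compared with incoming input, and the size of the mismatch is the prediction-error signal. The Python sketch below is purely illustrative and not from the study; all names (prediction_error, auditory_prior, visual_match, visual_mismatch) and the feature dimensionality are hypothetical.

    import numpy as np

    def prediction_error(predicted: np.ndarray, observed: np.ndarray) -> float:
        """Mean squared mismatch between predicted and observed features."""
        return float(np.mean((observed - predicted) ** 2))

    rng = np.random.default_rng(0)
    # Hypothetical feature vector predicted from prior auditory speech.
    auditory_prior = rng.normal(size=50)
    # Matching visual speech stays close to the prediction: small error.
    visual_match = auditory_prior + rng.normal(scale=0.1, size=50)
    # Mismatching visual speech is unrelated to the prediction: large error.
    visual_mismatch = rng.normal(size=50)
    print(prediction_error(auditory_prior, visual_match))     # small
    print(prediction_error(auditory_prior, visual_mismatch))  # large

In this toy reading, the large error on mismatch trials is analogous to the mismatch-related posterior STS activity the abstract reports.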

Details

Language(s): eng - English
Dates: 2012-09-20, 2012-09-27, 2013-01-15
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: Peer review
Identifiers: DOI: 10.1016/j.neuroimage.2012.09.047
PMID: 23023154
Other: Epub 2012
Degree type: -

Source 1

Title: NeuroImage
Source genre: Journal
Place, publisher, edition: Orlando, FL : Academic Press
Pages: -
Volume / issue: 65
Article number: -
Start / end page: 109 - 118
Identifier: ISSN: 1053-8119
CoNE: https://pure.mpg.de/cone/journals/resource/954922650166