  Visual mechanisms for voice‐identity recognition flexibly adjust to auditory noise level

Maguinness, C., & von Kriegstein, K. (2021). Visual mechanisms for voice‐identity recognition flexibly adjust to auditory noise level. Human Brain Mapping, 42(12), 3963-3982. doi:10.1002/hbm.25532.

Basic

Genre: Journal Article

Files

Maguinnes_2021.pdf (Publisher version), 4MB
Name: Maguinnes_2021.pdf
Description: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -


Creators

Creators:
Maguinness, Corrina (1, 2), Author
von Kriegstein, Katharina (1, 2), Author
Affiliations:
1 Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, TU Dresden, Germany
2 Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Content

Free keywords: FFA; Audio-visual; Motion; Multisensory; pSTS; Predictive coding; Voice-identity
 Abstract: Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so-called 'face-benefit' is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face-benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face-sensitive regions while participants recognised the identity of auditory-only speakers (previously learned by face) in high (SNR -4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face-benefit in both noise levels, for most participants (16 of 21). In high-noise, the recognition of face-learned speakers engaged the right posterior superior temporal sulcus motion-sensitive face area (pSTS-mFA), a region implicated in the processing of dynamic facial cues. The face-benefit in high-noise also correlated positively with increased functional connectivity between this region and voice-sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face-benefit. In low-noise, the face-benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS-mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice-identity recognition in auditory-only listening conditions.

Details

Language(s): eng - English
Dates: 2021-05-27, 2021-08-15
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1002/hbm.25532
Other: epub 2021
PMID: 34043249
Degree: -


Project information

Project name: -
Grant ID: KR 3735/5-1
Funding program: -
Funding organization: Deutsche Forschungsgemeinschaft (DFG)

Project name: -
Grant ID: SENSOCOM 647051
Funding program: Horizon 2020
Funding organization: European Research Council

Source 1

Title: Human Brain Mapping
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: New York : Wiley-Liss
Pages: -
Volume / Issue: 42 (12)
Sequence Number: -
Start / End Page: 3963 - 3982
Identifier: ISSN: 1065-9471
CoNE: https://pure.mpg.de/cone/journals/resource/954925601686