
Record


Released

Journal Article

How do we recognise who is speaking?

MPG Authors

Mathias, Samuel R.
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Center for Computational Neuroscience and Neurotechnology, Boston University, MA, USA;


von Kriegstein, Katharina
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Psychology, Humboldt University Berlin, Germany;

External Resources

https://doi.org/10.2741/S417
(Publisher version)

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Mathias, S. R., & von Kriegstein, K. (2014). How do we recognise who is speaking? Frontiers in Bioscience, S6, 92-109. doi:10.2741/S417.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0019-E9AE-7
Abstract
The human brain effortlessly extracts a wealth of information from natural speech, which allows the listener to both understand the speech message and recognise who is speaking. This article reviews behavioural and neuroscientific work that has attempted to characterise how listeners achieve speaker recognition. Behavioural studies suggest that the action of a speaker's glottal folds and the overall length of their vocal tract carry important voice-quality information. Although these cues are useful for discriminating and recognising speakers under certain circumstances, listeners may use virtually any systematic feature for recognition. Neuroscientific studies have revealed that speaker recognition relies upon a predominantly right-lateralised network of brain regions. Specifically, the posterior parts of the superior temporal sulcus appear to perform some of the acoustical analyses necessary for the perception of speaker and message, whilst anterior portions may play a more abstract role in perceiving speaker identity. This voice-processing network is supported by direct, early connections to non-auditory regions, such as the visual face-sensitive area in the fusiform gyrus, which may serve to optimise person recognition.