
Released

Journal Article

Is the voice an auditory face?: An ALE meta-analysis comparing vocal and facial emotion processing

MPG Authors

Schirmer,  Annett
Department of Psychology, Chinese University of Hong Kong, China;
Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

External Resources
There are no external resources on record
Full texts (restricted access)
There are currently no full texts shared for your IP range.
Full texts (publicly accessible)

Schirmer_2017.pdf
(publisher version), 376 KB

Supplementary Material (publicly accessible)
There are no publicly accessible supplementary materials available
Citation

Schirmer, A. (2018). Is the voice an auditory face?: An ALE meta-analysis comparing vocal and facial emotion processing. Social Cognitive and Affective Neuroscience, 13(1), 1-13. doi:10.1093/scan/nsx142.


Citation link: https://hdl.handle.net/21.11116/0000-0000-2FD1-0
Abstract
This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition. Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus and goal-driven mechanisms, respectively. Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced, modality-matched data sets for modality comparison. Stimulus-driven emotion processing engaged large networks with significant modality differences in the superior temporal (voice-specific) and the medial temporal (face-specific) cortex. Goal-driven processing was associated with only a small cluster in the dorsomedial prefrontal cortex for voices but not faces. Neither stimulus- nor goal-driven processing showed significant modality overlap. Together, these findings suggest that stimulus-driven processes shape activity in the social brain more powerfully than goal-driven processes in both the visual and the auditory domains. Yet, whereas faces emphasize subcortical emotional and mnemonic mechanisms, voices emphasize cortical mechanisms associated with perception and effortful stimulus evaluation (e.g. via subvocalization). These differences may be due to sensory stimulus properties and highlight the need for a modality-specific perspective when modeling emotion processing in the brain.