

Journal Article

Is the voice an auditory face? An ALE meta-analysis comparing vocal and facial emotion processing

MPS-Authors

Schirmer, Annett
Department of Psychology, Chinese University of Hong Kong, China;
Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Fulltext (public)

Schirmer_2017.pdf
(Publisher version), 376KB

Citation

Schirmer, A. (2018). Is the voice an auditory face? An ALE meta-analysis comparing vocal and facial emotion processing. Social Cognitive and Affective Neuroscience, 13(1), 1-13. doi:10.1093/scan/nsx142.


Cite as: http://hdl.handle.net/21.11116/0000-0000-2FD1-0
Abstract
This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition. Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus and goal-driven mechanisms, respectively. Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced, modality-matched data sets for modality comparison. Stimulus-driven emotion processing engaged large networks with significant modality differences in the superior temporal (voice-specific) and the medial temporal (face-specific) cortex. Goal-driven processing was associated with only a small cluster in the dorsomedial prefrontal cortex for voices but not faces. Neither stimulus- nor goal-driven processing showed significant modality overlap. Together, these findings suggest that stimulus-driven processes shape activity in the social brain more powerfully than goal-driven processes in both the visual and the auditory domains. Yet, whereas faces emphasize subcortical emotional and mnemonic mechanisms, voices emphasize cortical mechanisms associated with perception and effortful stimulus evaluation (e.g. via subvocalization). These differences may be due to sensory stimulus properties and highlight the need for a modality-specific perspective when modeling emotion processing in the brain.