  Selective attention modulates early human evoked potentials during emotional face-voice processing

Ho, H. T., Schröger, E., & Kotz, S. A. (2015). Selective attention modulates early human evoked potentials during emotional face-voice processing. Journal of Cognitive Neuroscience, 27(4), 798-818. doi:10.1162/jocn_a_00734.


Creators

Ho, Hao Tam (1), Author
Schröger, Erich (2), Author
Kotz, Sonja A. (3, 4), Author
Affiliations:
(1) Minerva Research Group Neurocognition of Rhythm in Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society
(2) University of Leipzig, Germany
(3) University of Manchester, United Kingdom
(4) Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society, Leipzig, DE

Content

Free keywords: -
Abstract: Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face–voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face–voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 time window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face–voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.

Details

Language(s): eng - English
Dates: 2015-02-27, 2015-04
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: Peer
Identifiers: DOI: 10.1162/jocn_a_00734
PMID: 25269113
Other: Epub 2014
Degree: -

Source 1

Title: Journal of Cognitive Neuroscience
Source Genre: Journal
Publ. Info: Cambridge, MA: MIT Press Journals
Pages: -
Volume / Issue: 27 (4)
Sequence Number: -
Start / End Page: 798 - 818
Identifier: ISSN: 0898-929X
CoNE: https://pure.mpg.de/cone/journals/resource/991042752752726