  Selective attention modulates early human evoked potentials during emotional face-voice processing

Ho, H. T., Schröger, E., & Kotz, S. A. (2015). Selective attention modulates early human evoked potentials during emotional face-voice processing. Journal of Cognitive Neuroscience, 27(4), 798-818. doi:10.1162/jocn_a_00734.


Basic data

Genre: Journal article


Creators

Creator(s):
Ho, Hao Tam1, Author
Schröger, Erich2, Author
Kotz, Sonja A.3, 4, Author
Affiliations:
1 Minerva Research Group Neurocognition of Rhythm in Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634560
2 University of Leipzig, Germany, ou_persistent22
3 University of Manchester, United Kingdom, ou_persistent22
4 Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society, Leipzig, DE, ou_634551

Content

Keywords: -
Abstract: Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face–voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face–voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 time window showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face–voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective—one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.

Details

Language(s): eng - English
Date: 2015-02-27, 2015-04
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: Peer review
Identifiers: DOI: 10.1162/jocn_a_00734
PMID: 25269113
Other: Epub 2014
Degree: -


Source 1

Title: Journal of Cognitive Neuroscience
Source genre: Journal
Place, publisher, edition: Cambridge, MA : MIT Press Journals
Pages: -
Volume / Issue: 27 (4)
Article number: -
Start / End page: 798 - 818
Identifier: ISSN: 0898-929X
CoNE: https://pure.mpg.de/cone/journals/resource/991042752752726