Record

 
 
  On the time course of vocal emotion recognition

Pell, M. D., & Kotz, S. A. (2011). On the time course of vocal emotion recognition. PLoS One, 6(11): e27256. doi:10.1371/journal.pone.0027256.


Files

Pell_OnTheTime.pdf (publisher version), 360 KB
Name: Pell_OnTheTime.pdf
Description: -
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: 2011
Copyright info: © 2011 Pell, Kotz. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Creators

Creators:
Pell, Marc D.¹, Author
Kotz, Sonja A.², Author
Affiliations:
¹ School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada, ou_persistent22
² Minerva Research Group Neurocognition of Rhythm in Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634560

Content

Keywords: -
Abstract: How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
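
The “identification point” reported in the abstract follows the usual auditory-gating convention: the earliest gate at which a listener selects the target emotion and does not change that response at any later gate. As a rough illustration only, the minimal Python sketch below computes that quantity; the function name, response labels, and gate durations are invented for the example and do not come from the study.

# A minimal sketch of the gating-paradigm "identification point": the
# earliest gate at which the listener selects the target emotion and
# keeps selecting it at every later gate. All names and numbers below
# are hypothetical illustrations, not the authors' code or data.

def identification_gate(responses, target):
    """Return the 1-based index of the first gate from which all
    remaining responses match the target, or None if never stable."""
    for gate in range(len(responses)):
        if all(r == target for r in responses[gate:]):
            return gate + 1
    return None

# Toy trial: categorizations of one pseudo-utterance at 7 syllable gates,
# with a hypothetical cumulative duration heard at each gate (ms).
responses = ["neutral", "fear", "fear", "fear", "fear", "fear", "fear"]
gate_ms = [210, 430, 517, 690, 850, 1010, 1180]

gate = identification_gate(responses, "fear")
if gate is not None:
    print(f"identified at gate {gate} ({gate_ms[gate - 1]} ms of speech)")
# -> identified at gate 2 (430 ms of speech)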

Details

Language(s): eng - English
Date: 2011-11-07
Publication status: Published online
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: DOI: 10.1371/journal.pone.0027256
Degree: -

Source 1

Title: PLoS One
Source genre: Journal
Creators: -
Affiliations: -
Place, publisher, edition: San Francisco, CA : Public Library of Science
Pages: -
Volume / issue: 6 (11)
Article number: e27256
Start / end page: -
Identifier: ISSN: 1932-6203
CoNE: https://pure.mpg.de/cone/journals/resource/1000000000277850