On the time course of vocal emotion recognition

Pell, M. D., & Kotz, S. A. (2011). On the time course of vocal emotion recognition. PLoS One, 6(11): e27256. doi:10.1371/journal.pone.0027256.

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-ACD7-0
Version Permalink: http://hdl.handle.net/11858/00-001M-0000-002B-CCDF-D
Genre: Journal Article

Files

Pell_OnTheTime.pdf (Publisher version), 360KB
Name:
Pell_OnTheTime.pdf
Description:
-
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
2011
Copyright Info:
© 2011 Pell, Kotz. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Creators

Creators:
Pell, Marc D.¹, Author
Kotz, Sonja A.², Author
Affiliations:
¹ School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
² Minerva Research Group Neurocognition of Rhythm in Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Content

Free keywords: -
 Abstract: How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
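
The identification-point measure described in the abstract can be made concrete with a short sketch. The following Python snippet is not the authors' analysis code; it assumes per-stimulus gating responses are available and applies one common stability criterion (the earliest gate at which the response is correct and remains correct at every later gate). All data shown are hypothetical.

```python
# A minimal sketch of estimating an emotion "identification point" from
# auditory gating data. Assumed input: for each stimulus, an ordered list
# of (gate_duration_ms, correct) pairs from gate 1 (first syllable) to
# gate 7 (full utterance). The stability rule and the data below are
# illustrative assumptions, not the published method or results.

from statistics import mean

def identification_point(gates):
    """Return the duration (ms) of the earliest gate at which the
    response is correct and stays correct at every later gate,
    or None if the emotion is never stably identified."""
    for i, (duration_ms, correct) in enumerate(gates):
        if correct and all(c for _, c in gates[i:]):
            return duration_ms
    return None

# Hypothetical gating responses for two stimuli of one emotion category.
stimuli = [
    [(180, False), (350, False), (510, True), (680, True),
     (850, True), (1020, True), (1190, True)],
    [(170, False), (340, True), (520, True), (690, True),
     (860, True), (1030, True), (1200, True)],
]

points = [p for p in (identification_point(s) for s in stimuli)
          if p is not None]
print(f"Mean identification point: {mean(points):.0f} ms")  # 425 ms here
```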

Details

Language(s): eng - English
Dates: 2011-11-07
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Method: -
Identifiers: DOI: 10.1371/journal.pone.0027256
Degree: -

Source 1

Title: PLoS One
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: San Francisco, CA : Public Library of Science
Pages: -
Volume / Issue: 6 (11)
Sequence Number: e27256
Start / End Page: -
Identifier: ISSN: 1932-6203
CoNE: https://pure.mpg.de/cone/journals/resource/1000000000277850