
Journal Article

The voice of emotion across species: How do human listeners recognize animals' affective states?

MPS-Authors
/persons/resource/persons19696

Hasting, Anna S.
Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Clinic for Cognitive Neurology, University of Leipzig, Germany;

/persons/resource/persons19791

Kotz, Sonja A.
Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
School of Psychological Sciences, University of Manchester, United Kingdom;

Fulltext (public)
Scheumann_VoiceofEmotion.pdf (Publisher version), 2MB

Citation

Scheumann, M., Hasting, A. S., Kotz, S. A., & Zimmermann, E. (2014). The voice of emotion across species: How do human listeners recognize animals' affective states? PLoS One, 9(3): e91192. doi:10.1371/journal.pone.0091192.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0024-4F87-C
Abstract
Voice-induced cross-taxa emotional recognition is the ability to understand the emotional state of another species based on its voice. In the past, induced affective states, experience-dependent higher cognitive processes or cross-taxa universal acoustic coding and processing mechanisms have been proposed to underlie this ability in humans. The present study sets out to distinguish the influence of familiarity and phylogeny on voice-induced cross-taxa emotional perception in humans. For the first time, two perspectives are taken into account: the self-perspective (i.e. the emotional valence induced in the listener) versus the others-perspective (i.e. correct recognition of the emotional valence of the recording context). Twenty-eight male participants listened to 192 vocalizations of four different species (human infant, dog, chimpanzee and tree shrew). Stimuli were recorded either in an agonistic (negative emotional valence) or an affiliative (positive emotional valence) context. Participants rated the emotional valence of the stimuli, adopting both the self- and the others-perspective, using a 5-point version of the Self-Assessment Manikin (SAM). Familiarity was assessed based on subjective ratings, objective labelling of the respective stimuli and interaction time with the respective species. Participants reliably recognized the emotional valence of human voices, whereas the results for animal voices were mixed. The correct classification of animal voices depended on the listener's familiarity with the species and the call type/recording context, whereas induced emotional states and phylogeny had less influence. Our results provide the first evidence that explicit voice-induced cross-taxa emotional recognition in humans is shaped more by experience-dependent cognitive mechanisms than by induced affective states or cross-taxa universal acoustic coding and processing mechanisms.