
Released

Conference Paper

Accentuation and emotions - two different systems?

MPS-Authors
/persons/resource/persons19528

Alter, Kai
MPI of Cognitive Neuroscience (Leipzig, -2003), The Prior Institutes, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

/persons/resource/persons19791

Kotz, Sonja A.
MPI of Cognitive Neuroscience (Leipzig, -2003), The Prior Institutes, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

/persons/resource/persons19971

Schirmer, Annett
MPI of Cognitive Neuroscience (Leipzig, -2003), The Prior Institutes, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

/persons/resource/persons19643

Friederici, Angela D.
MPI of Cognitive Neuroscience (Leipzig, -2003), The Prior Institutes, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Alter, K., Rank, E., Kotz, S. A., Toepel, U., Besson, M., Schirmer, A., et al. (2000). Accentuation and emotions - two different systems? In Speech and Emotion (pp. 138-142).


Cite as: http://hdl.handle.net/21.11116/0000-0003-4D9D-7
Abstract
Current investigations point to a relationship between syntax and prosody. However, prosody can also be linked to emotional markers of an utterance. We tested the latter option with event-related brain potentials (ERPs), which showed greater electrophysiological negativity for utterances with a neutral emotional state than for happiness and cold anger. In addition, the material was analyzed using an estimation of the harmonics-to-noise ratio (HNR), a measure of spectral flatness, as well as the maximum prediction gain of a speech production model, computed via the mutual information (MI) function. The results indicate that the HNR estimation correlates with accentuation, depending on the position of the vowel, whereas a low maximum prediction gain indicates a positive or negative emotional state of the speaker in comparison to the neutral state. Comparing the ERPs and the acoustic data, a relationship between the maximum prediction gain and the perception of emotions can be established.
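
For readers unfamiliar with the acoustic measure mentioned in the abstract, the following is a minimal sketch of a frame-wise harmonics-to-noise ratio (HNR) estimate based on the peak of the normalized autocorrelation in the pitch range. This is an illustrative, commonly used formulation, not the estimation procedure reported in the paper; the sampling rate, pitch-range bounds, and signal are assumed values for the example.

```python
import numpy as np

def hnr_db(frame: np.ndarray, sr: int, f0_min: float = 75.0, f0_max: float = 400.0) -> float:
    """Estimate the HNR (in dB) of one voiced speech frame via autocorrelation."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags, normalized so lag 0 equals 1.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return float("-inf")
    ac = ac / ac[0]
    # Peak of the autocorrelation within the plausible pitch-period range.
    lag_min, lag_max = int(sr / f0_max), int(sr / f0_min)
    r_max = ac[lag_min:lag_max].max()
    r_max = min(max(r_max, 1e-6), 1.0 - 1e-6)  # keep the log ratio finite
    # Treat r_max as the harmonic energy fraction and (1 - r_max) as the noise fraction.
    return 10.0 * np.log10(r_max / (1.0 - r_max))

# Example: a 150 Hz vowel-like tone with additive noise, sampled at 16 kHz (assumed values).
sr = 16000
t = np.arange(0, 0.04, 1.0 / sr)
signal = np.sin(2 * np.pi * 150 * t) + 0.05 * np.random.randn(t.size)
print(f"HNR ~ {hnr_db(signal, sr):.1f} dB")
```

A more periodic (strongly accented, voiced) frame yields a higher HNR, while breathier or noisier frames yield a lower one, which is the sense in which the abstract relates HNR to accentuation and vowel position.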