Abstract:
Emotional encoding is central to human communication. Emotional states are communicated via different channels of expression, i.e., we express emotions through body language, facial expression, and tone of voice. The latter has received increasing attention in the literature in recent years. It has been proposed that emotional prosody processing is a highly automatized process in language perception. In particular, it has been suggested that the decoding of emotional prosody starts very early. For example, previous evidence (Paulmann & Kotz, 2005) revealed that different valences (basic emotions) can be differentiated in an early event-related brain potential (ERP) component, the P200. More specifically, it has been suggested that the P200 is modulated by the valence and intensity of an auditory stimulus, as well as by lexical information. The current experiment set out to further explore the different factors that potentially influence (early) emotional prosody processing. To investigate 'pure' emotional prosody processing, we presented pseudo-sentences (sentences with no lexical content) spoken in six basic emotions plus a neutral baseline. To further explore the influence of task (internal vs. external appraisal) and arousal, we asked participants to rate either a) the speaker's arousal level or b) their own arousal level. Results confirm that different emotional intonations can be differentiated in a) an early component (P200) and b) a later component (after 500 ms). Analogous to the literature on visual emotional processing, we assume that the P200 reflects a first emotional encoding of the stimulus, including a valence tagging process. This first emotional encoding appears to be influenced particularly by pitch and intensity variations, but also by attentional stimulus evaluation processes (internal vs. external appraisal), as different modulations of the P200 component were found for the same acoustic stimuli under different task instructions.
The later ERP component (starting after 500 ms) is assumed to reflect a potentially more elaborate analysis of the emotional stimulus, as has been suggested for the visual domain (Cuthbert et al., 2000), a process that is influenced by task type and by the participant's arousal level. In sum, the current results confirm that emotional prosody processing is a highly automatized process built upon several subprocesses, including a) early processing of (structural) emotional characteristics of a stimulus, as reflected in the P200, and b) a more detailed analysis of the stimulus, including an integration of the established context and the valence of the emotional prosodic contour.