Contribution of Prosody in Audio-visual Integration to Emotional Perception of Virtual Characters

Volkova, E., Mohler, B., Linkenauger, S., Alexandrova, I., & Bülthoff, H. (2011). Contribution of Prosody in Audio-visual Integration to Emotional Perception of Virtual Characters. Poster presented at 12th International Multisensory Research Forum (IMRF 2011), Fukuoka, Japan.


Creators

show
hide
 Creators:
Volkova, E. (1, 2), Author
Mohler, B. (1, 2), Author
Linkenauger, S. (1, 2), Author
Alexandrova, I. (1, 2), Author
Bülthoff, H.H. (1, 2), Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797              
2Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794              

Content

Abstract: Recent technology provides us with realistic-looking virtual characters. Motion capture and elaborate mathematical models supply data for natural-looking, controllable facial and bodily animations. With the help of computational linguistics and artificial intelligence, we can automatically assign emotional categories to appropriate stretches of text, allowing us to simulate social scenarios in which verbal communication is important. All this makes virtual characters a valuable tool for creating versatile stimuli for research on the integration of emotional information from different modalities. We conducted an audio-visual experiment to investigate the differential contributions of emotional speech and facial expressions to emotion identification. We used recorded and synthesized speech as well as dynamic virtual faces, all enhanced for seven emotional categories. Participants were asked to recognize the prevalent emotion of paired face and voice stimuli. Results showed that when the voice was recorded, the vocalized emotion influenced participants' emotion identification more than the facial expression did. However, when the voice was synthesized, the facial expression influenced emotion identification more than the vocalized emotion. Additionally, participants performed worse at identifying either the facial expression or the vocalized emotion when the voice was synthesized. Our experimental method can help determine how to improve synthesized emotional speech.

Details

 Dates: 2011-10
 Publication Status: Published in print
Identifiers:
BibTeX Citekey: VolkovaMLAB2011
DOI: 10.1068/ic774

Event

Title: 12th International Multisensory Research Forum (IMRF 2011)
Place of Event: Fukuoka, Japan
Start-/End Date: 2011-10-17 - 2011-10-20

Source 1

Title: i-Perception
Source Genre: Journal
Volume / Issue: 2 (8)
Sequence Number: 1-20
Start / End Page: 774