Item Details


Released

Poster

Integration of Visual and Auditory Stimuli in the Perception of Emotional Expression in Virtual Characters

MPS-Authors

Volkova, E
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Linkenauger, S
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Alexandrova, I
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Mohler, B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There is no public fulltext available
Supplementary Material (public)
There is no public supplementary material available
Citation

Volkova, E., Linkenauger, S., Alexandrova, I., Bülthoff, H., & Mohler, B. (2011). Integration of Visual and Auditory Stimuli in the Perception of Emotional Expression in Virtual Characters. Poster presented at 34th European Conference on Visual Perception (ECVP 2011), Toulouse, France.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BA76-C
Abstract
Virtual characters are a potentially valuable tool for creating stimuli for research investigating the perception of emotion. We conducted an audio-visual experiment to investigate how effectively our stimuli convey the intended emotion. We combined dynamic virtual faces with pre-recorded (Burkhardt et al., 2005, Interspeech 2005, 1517–1520) and synthesized speech to create audio-visual stimuli covering all possible combinations of facial and vocal emotion. Each voice and face stimulus aimed to express one of seven emotional categories. Participants judged the prevalent emotion of each stimulus. For the pre-recorded voice, the vocalized emotion influenced participants’ emotion judgments more than the facial expression did. For the synthesized voice, however, the facial expression influenced participants’ emotion judgments more than the vocalized emotion. While participants labeled the stimuli rather accurately (>76%) when the face and voice expressed the same emotion, they were worse overall at correctly identifying the stimuli when the voice was synthesized. We further analyzed the difference between the emotional categories within each stimulus and found that the valence distance between the emotions of the face and voice significantly affected emotion recognition for both natural and synthesized voices. This experimental design provides a method for improving the emotional expression of virtual characters.
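
The design described in the abstract pairs every facial emotion with every vocal emotion and relates recognition performance to the valence distance between the two channels. The following Python sketch illustrates that structure only; the seven category labels and the valence scores are hypothetical placeholders chosen for illustration, not values from the study.

```python
from itertools import product

# Seven example emotion categories; the valence scores below are hypothetical
# illustrative values on a -1..1 scale, not taken from the study.
EMOTIONS = ["neutral", "joy", "anger", "sadness", "fear", "disgust", "boredom"]
VALENCE = {
    "neutral": 0.0, "joy": 0.9, "anger": -0.8, "sadness": -0.7,
    "fear": -0.6, "disgust": -0.5, "boredom": -0.2,
}

def build_stimuli():
    """Full factorial design: every facial emotion paired with every vocal emotion."""
    return [{"face": f, "voice": v, "congruent": f == v}
            for f, v in product(EMOTIONS, EMOTIONS)]

def valence_distance(face, voice):
    """Absolute difference in valence between the face and voice emotions."""
    return abs(VALENCE[face] - VALENCE[voice])

if __name__ == "__main__":
    stimuli = build_stimuli()
    print(len(stimuli))  # 49 face/voice combinations per voice type (7 x 7)
    for s in stimuli[:3]:
        print(s["face"], s["voice"], round(valence_distance(s["face"], s["voice"]), 2))
```

In an analysis like the one the abstract describes, each trial’s valence distance could then serve as a predictor of whether the participant’s emotion judgment was correct, separately for the pre-recorded and synthesized voice conditions.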