  Manipulating video sequences to determine the components of conversational facial expressions

Cunningham, D., Kleiner, M., Wallraven, C., & Bülthoff, H. (2005). Manipulating video sequences to determine the components of conversational facial expressions. ACM Transactions on Applied Perception, 2(3), 251-269. doi:10.1145/1077399.1077404.

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-D50F-B Version Permalink: http://hdl.handle.net/21.11116/0000-0003-1D5E-5
Genre: Journal Article


Creators

Cunningham, D. (1, 2), Author
Kleiner, M. (1, 2, 3), Author
Wallraven, C. (1, 2), Author
Bülthoff, H. H. (1, 2), Author
Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794
3. Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702

Content

Abstract: Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively "freeze" portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions alone is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques not only provides fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.

Details

Dates: 2005-07
Publication Status: Published in print
Identifiers: DOI: 10.1145/1077399.1077404
BibTeX Citekey: 3540


Source 1

Title: ACM Transactions on Applied Perception
Source Genre: Journal
Publ. Info: New York, NY : Association for Computing Machinery
Volume / Issue: 2 (3)
Start / End Page: 251 - 269
Identifier: ISSN: 1544-3558
CoNE: https://pure.mpg.de/cone/journals/resource/111056648028200