  Robust Semantic Analysis by Synthesis of 3D Facial Motion

Breidt, M., Bülthoff, H., & Curio, C. (2011). Robust Semantic Analysis by Synthesis of 3D Facial Motion. In Ninth IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) (pp. 713-719). Piscataway, NJ, USA: IEEE.

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-BC6C-3
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-1891-E
Genre: Conference Paper


Creators

Breidt, M. (1, 2, 3), Author
Bülthoff, H. H. (1, 2), Author
Curio, C. (1, 2, 3), Author
Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794
3. Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702

Content

Free keywords: -
Abstract: Rich face models already have a large impact on the fields of computer vision, perception research, and computer graphics and animation. Attributes such as descriptiveness, semantics, and intuitive control are desirable properties but hard to achieve. Towards the goal of building such high-quality face models, we present a 3D model-based analysis-by-synthesis approach that is able to parameterize 3D facial surfaces, and that can estimate the state of semantically meaningful components, even from noisy depth data such as that produced by Time-of-Flight (ToF) cameras or devices such as the Microsoft Kinect. At the core, we present a specialized 3D morphable model (3DMM) for facial expression analysis and synthesis. In contrast to many other models, our model is derived from a large corpus of localized facial deformations that were recorded as 3D scans from multiple identities. This allows us to analyze unstructured dynamic 3D scan data using a modified Iterative Closest Point model fitting process, followed by a constrained Action Unit model regression, resulting in semantically meaningful facial deformation time courses. We demonstrate the generative capabilities of our 3DMMs for facial surface reconstruction on high- and low-quality surface data from a ToF camera. The analysis of simultaneous recordings of facial motion using passive stereo and a noisy Time-of-Flight camera shows good agreement of the recovered facial semantics.
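The abstract's core pipeline, synthesizing a face as a linear combination of Action Unit (AU) deformation components and recovering the activations by a non-negativity-constrained regression, can be sketched in miniature. Everything below (sizes, random toy data, and the choice of non-negative least squares as the constrained solver) is an illustrative assumption, not taken from the paper:

```python
import numpy as np
from scipy.optimize import nnls

# Toy stand-in for a 3DMM: a neutral face plus a linear combination
# of Action Unit (AU) deformation components. Dimensions are arbitrary.
rng = np.random.default_rng(0)
n_vertices = 500
n_aus = 5

neutral = rng.normal(size=3 * n_vertices)             # flattened neutral shape
au_basis = rng.normal(size=(3 * n_vertices, n_aus))   # one column per AU

def synthesize(weights):
    """Generate a facial surface from AU activation weights."""
    return neutral + au_basis @ weights

# Simulate a noisy observed scan of an expression with known AU weights.
true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.5])
scan = synthesize(true_w) + 0.01 * rng.normal(size=3 * n_vertices)

# "Constrained AU regression" modeled here as non-negative least squares
# on the residual between the scan and the neutral face.
est_w, _ = nnls(au_basis, scan - neutral)
print(np.round(est_w, 2))
```

In the paper the observed surface would first be brought into correspondence with the model by a modified ICP fit; here the toy scan is already in model coordinates, so only the regression step is shown.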

Details

Language(s):
 Dates: 2011-05
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
Identifiers: DOI: 10.1109/FG.2011.5771336
BibTeX Citekey: 7016
 Degree: -

Event

Title: Ninth IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
Place of Event: Santa Barbara, CA, USA
Start-/End Date: 2011-03-21 - 2011-03-25

Source 1

Title: Ninth IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: Piscataway, NJ, USA : IEEE
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 713 - 719
Identifier: ISBN: 978-1-4244-9140-7