
Item Details


Released

Conference Paper

Robust Semantic Analysis by Synthesis of 3D Facial Motion

MPS-Authors
/persons/resource/persons83829

Breidt,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83871

Curio,  C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Breidt, M., Bülthoff, H., & Curio, C. (2011). Robust Semantic Analysis by Synthesis of 3D Facial Motion. In Ninth IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) (pp. 713-719). Piscataway, NJ, USA: IEEE.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BC6C-3
Abstract
Rich face models already have a large impact on computer vision and perception research, as well as on computer graphics and animation. Attributes such as descriptiveness, semantics, and intuitive control are desirable properties but hard to achieve. Towards the goal of building such high-quality face models, we present a 3D model-based analysis-by-synthesis approach that is able to parameterize 3D facial surfaces and that can estimate the state of semantically meaningful components, even from noisy depth data such as that produced by Time-of-Flight (ToF) cameras or devices such as Microsoft Kinect. At the core, we present a specialized 3D morphable model (3DMM) for facial expression analysis and synthesis. In contrast to many other models, our model is derived from a large corpus of localized facial deformations that were recorded as 3D scans from multiple identities. This allows us to analyze unstructured dynamic 3D scan data using a modified Iterative Closest Point model fitting process, followed by a constrained Action Unit model regression, resulting in semantically meaningful facial deformation time courses. We demonstrate the generative capabilities of our 3DMMs for facial surface reconstruction on high- and low-quality surface data from a ToF camera. The analysis of simultaneous recordings of facial motion using passive stereo and a noisy Time-of-Flight camera shows good agreement between the recovered facial semantics.
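The two core fitting steps named in the abstract — ICP-style rigid model fitting and constrained Action Unit (AU) regression — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes pre-matched point correspondences (a real ICP loop would re-establish them each iteration), a flattened vertex representation, and hypothetical names (`rigid_align`, `fit_au_weights`, `au_basis`); the rigid step uses the standard Kabsch solution, and the AU regression is a simple projected-gradient solver for box-constrained least squares.

```python
import numpy as np

def rigid_align(src, dst):
    """One rigid-alignment step (Kabsch): find rotation R and translation t
    such that R @ src_i + t ~ dst_i, for pre-matched (N, 3) point sets."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def fit_au_weights(target, neutral, au_basis, iters=800, lr=0.2):
    """Constrained AU regression sketch: minimise
    ||neutral + au_basis @ w - target||^2 subject to 0 <= w <= 1,
    via projected gradient descent.
    target, neutral: (3N,) flattened vertices; au_basis: (3N, K)."""
    w = np.zeros(au_basis.shape[1])
    for _ in range(iters):
        r = neutral + au_basis @ w - target       # per-vertex residual
        w -= lr * (au_basis.T @ r) / len(target)  # scaled gradient step
        w = np.clip(w, 0.0, 1.0)                  # project onto [0, 1]^K
    return w
```

The box constraint reflects the semantics of AU activations (each unit is either inactive, fully active, or somewhere in between), which is what makes the recovered weight time courses interpretable rather than an arbitrary least-squares fit.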