
Conference Paper

Accurate and marker-less head tracking using depth sensors

MPS-Authors

Breidt, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Curio, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Breidt, M., Bülthoff, H. H., & Curio, C. (2013). Accurate and marker-less head tracking using depth sensors. In S. Czanner & W. Tang (Eds.), Theory and Practice of Computer Graphics 2013 (pp. 41-48). Aire-la-Ville, Switzerland: Eurographics Association.


Cite as: http://hdl.handle.net/11858/00-001M-0000-001A-1347-6
Abstract
Parameterized, high-fidelity 3D surface models can not only be used for rendering animations in the context of Computer Graphics (CG), but have become increasingly popular for analyzing data, making it accessible to CG systems in an Analysis-by-Synthesis loop. In this paper, we utilize this concept for accurate head tracking by fitting a statistical 3D model to marker-less face data acquired with a low-cost depth sensor, and demonstrate its robustness in a challenging car driving scenario. We compute 3D head position and orientation with a mesh-based 3D shape matching algorithm that is independent of person identity and sensor type, and at the same time robust to facial expressions, speech, partial occlusion and illumination changes. Different strategies for obtaining the 3D face model are evaluated, trading off computational complexity and accuracy. Ground truth data for head pose are obtained from simultaneous marker-based tracking. Average tracking errors are below 6 mm for head position and below 2.5° for head orientation, demonstrating the system's potential to be used as part of a non-intrusive head tracking system for use in Augmented Reality or driver assistance systems.
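The paper's mesh-based shape matching is not specified in this record, but the core step of recovering a rigid head pose (rotation and translation) from corresponding 3D points — template mesh vertices matched against depth-sensor points — is classically solved with the Kabsch/Procrustes algorithm. The sketch below is an illustrative assumption, not the authors' implementation; in a full tracker, correspondences would typically come from an ICP-style nearest-neighbor search.

```python
import numpy as np

def rigid_align(template, observed):
    """Estimate rotation R and translation t minimizing
    sum_i ||R @ template[i] + t - observed[i]||^2
    via the Kabsch algorithm (SVD of the cross-covariance matrix).
    template, observed: (N, 3) arrays of corresponding 3D points.
    """
    # Center both point sets on their centroids.
    ct = template.mean(axis=0)
    co = observed.mean(axis=0)
    A = template - ct
    B = observed - co
    # Cross-covariance and its SVD.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ ct
    return R, t
```

Given a known ground-truth pose (as from the marker-based tracking used in the paper), the recovered `R` and `t` can be compared directly to report millimeter and degree errors.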