
Released

Conference Paper

Accurate 3D Head Pose Estimation under Real-World Driving Conditions: A Pilot Study

MPS-Authors
Breidt,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff,  HH
Project group: Cybernetics Approach to Perception & Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Curio,  C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Breidt, M., Bülthoff, H., & Curio, C. (2016). Accurate 3D Head Pose Estimation under Real-World Driving Conditions: A Pilot Study. In 19th International Conference on Intelligent Transportation Systems (ITSC 2016) (pp. 1261-1268). Piscataway, NJ, USA: IEEE.


Cite as: https://hdl.handle.net/21.11116/0000-0000-7A58-5
Abstract
Reliable and accurate car driver head pose estimation is an important function for the next generation of Advanced Driver Assistance Systems that need to consider the driver state in their analysis. For optimal performance, head pose estimation needs to be non-invasive, calibration-free and accurate under varying driving and illumination conditions. In this pilot study we investigate a 3D head pose estimation system that automatically fits a statistical 3D face model to measurements of a driver's face, acquired with a low-cost depth sensor on challenging real-world data. We evaluate the results of our sensor-independent, driver-adaptive approach against those of a state-of-the-art camera-based 2D face tracking system as well as a non-adaptive 3D model, relative to our own ground-truth data, and compare to other 3D benchmarks. We find large accuracy benefits of the adaptive 3D approach. Our system shows a median error of 5.99 mm for position and 2.12° for rotation while delivering a full 6-DOF pose with very little degradation from strong illumination changes or out-of-plane rotations of more than 50°. In terms of accuracy, 95% of all our results have a position error of less than 9.50 mm and a rotation error of less than 4.41°. Compared to the 2D method, this represents a 59.7% reduction of the 95% rotation accuracy threshold and a 56.1% reduction of the median rotation error.
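
As a minimal illustration of the summary statistics quoted in the abstract (median and 95th-percentile position and rotation errors, and the relative reduction with respect to a 2D baseline), the Python sketch below computes these quantities from per-frame error arrays. The arrays and variable names here are synthetic placeholders assumed for illustration only; they are not the study's data or code.

    # Sketch: median / 95th-percentile error statistics and relative reduction,
    # computed from hypothetical per-frame pose errors (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-frame errors, one value per video frame.
    pos_err_mm_3d  = np.abs(rng.normal(6.0, 2.0, size=1000))   # position error [mm]
    rot_err_deg_3d = np.abs(rng.normal(2.0, 1.0, size=1000))   # rotation error [deg]
    rot_err_deg_2d = np.abs(rng.normal(5.0, 2.5, size=1000))   # 2D baseline   [deg]

    def summarize(err):
        """Return the median and 95th percentile of an error array."""
        return np.median(err), np.percentile(err, 95)

    med_rot_3d, p95_rot_3d = summarize(rot_err_deg_3d)
    med_rot_2d, p95_rot_2d = summarize(rot_err_deg_2d)

    # Relative reduction of the 3D approach versus the 2D baseline,
    # the form of comparison behind the percentage figures in the abstract.
    p95_reduction    = 100.0 * (1.0 - p95_rot_3d / p95_rot_2d)
    median_reduction = 100.0 * (1.0 - med_rot_3d / med_rot_2d)

    print(f"3D rotation error: median {med_rot_3d:.2f} deg, 95th pct {p95_rot_3d:.2f} deg")
    print(f"Reduction vs. 2D: {p95_reduction:.1f}% (95th pct), {median_reduction:.1f}% (median)")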