
Conference Paper

Face Models from Noisy 3D Cameras

MPS-Authors
/persons/resource/persons83829

Breidt, M.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff, H. H.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83871

Curio, C.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Breidt, M., Bülthoff, H. H., & Curio, C. (2010). Face Models from Noisy 3D Cameras. In M.-P. Cani, & A. Sheffer (Eds.), SA '10: ACM SIGGRAPH ASIA 2010 Sketches (pp. 1-2). New York, NY, USA: ACM Press.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-BD2C-B
Abstract
Affordable 3D vision is just about to enter the mass market for consumer products such as video game consoles or TV sets. Having depth information in this context is beneficial for segmentation as well as for gaining robustness against illumination effects, both of which are hard problems when dealing with color camera data in typical living-room situations. Several techniques compute 3D (or rather 2.5D) depth information from camera data, such as real-time stereo, time-of-flight (TOF), or real-time structured light, but all produce noisy depth data at fairly low resolutions. Not surprisingly, most applications are currently limited to basic gesture recognition using the full body. In particular, TOF cameras are a relatively new and promising technology for compact, simple and fast 2.5D depth measurements. Because they operate by measuring the flight time of infrared light as it bounces off the subject, these devices have comparatively low image resolution (176 x 144 to 320 x 240 pixels) with a high level of noise present in the raw data.
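The noise behaviour described above can be illustrated with a minimal sketch: since TOF range noise is roughly independent across frames, averaging N frames of a static scene reduces the noise standard deviation by about 1/sqrt(N). The resolution, noise level, and flat-scene setup below are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Illustrative assumptions: a flat scene 1.5 m from the camera,
# 176 x 144 pixels (a typical low TOF resolution), ~2 cm Gaussian
# range noise per frame. None of these come from the paper itself.
rng = np.random.default_rng(0)
H, W = 144, 176
true_depth = np.full((H, W), 1.5)

def noisy_frame():
    """One simulated TOF frame: ground truth plus Gaussian range noise."""
    return true_depth + rng.normal(0.0, 0.02, size=(H, W))

def temporal_average(n_frames):
    """Average n frames of a static scene; noise std drops ~ 1/sqrt(n)."""
    return np.mean([noisy_frame() for _ in range(n_frames)], axis=0)

single = noisy_frame()
averaged = temporal_average(16)

err_single = np.std(single - true_depth)    # ~0.02 m
err_averaged = np.std(averaged - true_depth)  # ~0.005 m
```

Temporal averaging only helps for static scenes; for moving faces, edge-preserving spatial filters or model-based fitting (as the paper pursues) are needed instead.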