
Released

Conference Paper

Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

MPS-Authors
Tzionas, Dimitris
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society

Srikantha, Abhilash
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society

Citation

Tzionas, D., Srikantha, A., Aponte, P., & Gall, J. (2014). Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points. In Pattern Recognition. 36th German Conference, GCPR 2014. Proceedings (pp. 277-289). Cham et al.: Springer International Publishing.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0024-E35D-9
Abstract
Hand motion capture has been an active research topic in recent years, following the success of full-body pose tracking. Despite the similarities, hand tracking proves to be more challenging: it is characterized by higher dimensionality, severe occlusions, and self-similarity between fingers. For this reason, most approaches rely on strong assumptions, such as hands in isolation or expensive multi-camera systems, which limit their practical use. In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.
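To make the fusion idea in the abstract concrete, a minimal sketch of what such a combined objective could look like is given below. This is an illustrative assumption, not the formulation from the paper: the term names and the weights lambda_col and lambda_sal are placeholders introduced here for exposition.

% Illustrative sketch only -- not the paper's actual objective.
% theta  : pose parameters of the articulated hand model(s)
% E_data : generative term aligning the rendered hand model to the RGB-D observations
% E_col  : penalty on inter-penetrations reported by collision detection
% E_sal  : agreement with discriminatively detected salient points (e.g. fingertips)
\begin{equation*}
  E(\theta) \;=\; E_{\mathrm{data}}(\theta)
  \;+\; \lambda_{\mathrm{col}}\, E_{\mathrm{col}}(\theta)
  \;+\; \lambda_{\mathrm{sal}}\, E_{\mathrm{sal}}(\theta),
\end{equation*}
where the weights balance the three terms and the pose theta would be estimated per frame by minimizing E.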