
Released

Paper

Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data

MPS-Authors
Habermann, Marc
Computer Graphics, MPI for Informatics, Max Planck Society;

Xu, Weipeng
Computer Graphics, MPI for Informatics, Max Planck Society;

Habibie, Ikhsanul
Computer Graphics, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:2003.09572.pdf (Preprint), 9 MB

Citation

Zhou, Y., Habermann, M., Xu, W., Habibie, I., Theobalt, C., & Xu, F. (2020). Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data. Retrieved from https://arxiv.org/abs/2003.09572.


Cite as: https://hdl.handle.net/21.11116/0000-0007-E0D3-D
Abstract
We present a novel method for monocular hand shape and pose estimation at
unprecedented runtime performance of 100 fps and at state-of-the-art accuracy.
This is enabled by a new learning-based architecture designed to make use of
all available sources of hand training data: image data with either 2D or 3D
annotations, as well as stand-alone 3D animations without corresponding image
data. It features a 3D hand joint detection module and an inverse kinematics
module that not only regresses 3D joint positions but also maps them to joint
rotations in a single feed-forward pass. This output makes the method more
directly usable for applications in computer vision and graphics than
regressing 3D joint positions alone. We demonstrate that our architectural
design leads to significant quantitative and qualitative improvements over the
state of the art on several challenging benchmarks. Our model is publicly
available for future research.
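
A minimal PyTorch sketch of the two-stage design described in the abstract: a
detection network regresses 3D joint positions from a monocular image, and an
inverse kinematics network maps those positions to per-joint rotations within
the same feed-forward pass. All module names, layer sizes, and the quaternion
rotation parameterization below are illustrative assumptions, not the paper's
actual architecture.

import torch
import torch.nn as nn

NUM_JOINTS = 21  # common hand keypoint count (assumption)

class JointDetector(nn.Module):
    """Toy CNN that regresses 3D joint positions from an RGB hand crop."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, NUM_JOINTS * 3)

    def forward(self, image):                      # (B, 3, H, W)
        return self.head(self.backbone(image)).view(-1, NUM_JOINTS, 3)

class InverseKinematicsNet(nn.Module):
    """MLP mapping 3D joint positions to per-joint rotations (quaternions)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(NUM_JOINTS * 3, 256), nn.ReLU(),
            nn.Linear(256, NUM_JOINTS * 4),        # one quaternion per joint
        )

    def forward(self, joints):                     # (B, NUM_JOINTS, 3)
        q = self.mlp(joints.flatten(1)).view(-1, NUM_JOINTS, 4)
        return q / q.norm(dim=-1, keepdim=True)    # normalize to unit quaternions

# Single feed-forward pass: image -> joint positions -> joint rotations.
detector, ik_net = JointDetector(), InverseKinematicsNet()
image = torch.randn(1, 3, 128, 128)                # dummy monocular crop
positions = detector(image)                        # (1, 21, 3)
rotations = ik_net(positions)                      # (1, 21, 4)

Regressing rotations rather than only positions is what makes the output
directly usable for graphics applications: the rotations can drive a kinematic
hand skeleton or a parametric hand model without a separate optimization-based
inverse kinematics step.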