
Released

Journal Article

A Deeper Look into DeepCap

MPS-Authors
/persons/resource/persons101676

Habermann, Marc
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

/persons/resource/persons206382

Xu, Weipeng
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons118756

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

arXiv:2111.10563.pdf
(Preprint), 14MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Habermann, M., Xu, W., Zollhöfer, M., Pons-Moll, G., & Theobalt, C. (2023). A Deeper Look into DeepCap. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4), 4009-4022. doi:10.1109/TPAMI.2021.3093553.


Cite as: https://hdl.handle.net/21.11116/0000-0009-8C33-0
Abstract
Human performance capture is an important computer vision problem with many
applications in movie production and virtual/augmented reality. Many previous
performance capture approaches either required expensive multi-view setups or
did not recover dense, space-time-coherent geometry with frame-to-frame
correspondences. We propose a novel deep learning approach for monocular dense
human performance capture. Our method is trained in a weakly supervised manner
based on multi-view supervision, completely removing the need for training
data with 3D ground-truth annotations. The network architecture is based on
two separate networks that disentangle the task into a pose estimation step
and a non-rigid surface deformation step. Extensive qualitative and
quantitative evaluations show that our approach outperforms the state of the
art in terms of quality and robustness. This work is an extended version of
DeepCap, in which we provide more detailed explanations, comparisons, and
results, as well as applications.
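
The two-network disentanglement and the weakly supervised multi-view training
described in the abstract can be sketched in code. The following is a minimal,
hypothetical illustration derived only from the description above, not the
authors' implementation: the tiny encoders, the per-node 6-DoF deformation
parameterization, and the point-in-silhouette loss are assumptions standing in
for the paper's actual architecture and differentiable rendering terms.

    # Hypothetical sketch (not the authors' code) of the two-network design:
    # PoseNet regresses skeletal pose, DefNet regresses non-rigid surface
    # deformation, and supervision comes only from calibrated 2D views.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_encoder(out_dim: int) -> nn.Module:
        """Tiny stand-in image encoder (assumption; a real system would use
        a much larger CNN backbone)."""
        return nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    class PoseNet(nn.Module):
        """Step 1: regress skeletal pose (axis-angle per joint) from one
        monocular image."""
        def __init__(self, num_joints: int = 23):
            super().__init__()
            self.net = make_encoder(num_joints * 3)

        def forward(self, img):
            return self.net(img)

    class DefNet(nn.Module):
        """Step 2: regress a non-rigid deformation, parameterized here
        (assumption) as a 6-DoF correction per deformation-graph node."""
        def __init__(self, num_nodes: int = 500):
            super().__init__()
            self.net = make_encoder(num_nodes * 6)

        def forward(self, img):
            return self.net(img).view(img.shape[0], -1, 6)

    def project(verts, K, Rt):
        """Pinhole projection: verts (B,V,3), K (B,3,3), Rt (B,3,4)
        -> pixel coordinates (B,V,2)."""
        v_h = F.pad(verts, (0, 1), value=1.0)           # homogeneous coords
        cam = torch.einsum('bij,bvj->bvi', Rt, v_h)     # to camera space
        pix = torch.einsum('bij,bvj->bvi', K, cam)      # to image plane
        return pix[..., :2] / pix[..., 2:3].clamp(min=1e-6)

    def multiview_silhouette_loss(verts, cams, masks):
        """Weak supervision without 3D ground truth: every deformed vertex
        should project inside the foreground mask of every training view.
        (The skinning/deformation layer that turns pose and node transforms
        into `verts` is omitted here.)"""
        loss = 0.0
        for (K, Rt), mask in zip(cams, masks):          # one term per camera
            B, _, H, W = mask.shape
            pix = project(verts, K, Rt)
            grid = torch.stack([pix[..., 0] / (W - 1) * 2 - 1,
                                pix[..., 1] / (H - 1) * 2 - 1], dim=-1)
            inside = F.grid_sample(mask, grid.unsqueeze(2),
                                   align_corners=True)  # sample mask at verts
            loss = loss + (1.0 - inside).mean()
        return loss / len(cams)

At test time only the monocular input image would be needed; the calibrated
cameras and foreground masks enter solely through the training loss, which is
what makes the supervision "weak" in the sense of requiring no 3D annotations.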