
Record

Released

Research Paper

DeepCap: Monocular Human Performance Capture Using Weak Supervision

MPG Authors
/persons/resource/persons101676

Habermann, Marc
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons206382

Xu, Weipeng
Computer Graphics, MPI for Informatics, Max Planck Society;

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
There are no external resources available
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

arXiv:2003.08325.pdf
(Preprint), 3 MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Habermann, M., Xu, W., Zollhöfer, M., Pons-Moll, G., & Theobalt, C. (2020). DeepCap: Monocular Human Performance Capture Using Weak Supervision. Retrieved from https://arxiv.org/abs/2003.08325.


Citation link: https://hdl.handle.net/21.11116/0000-0007-E010-9
Abstract
Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality. Many previous performance capture approaches either required expensive multi-view setups or did not recover dense, space-time-coherent geometry with frame-to-frame correspondences. We propose a novel deep learning approach for monocular dense human performance capture. Our method is trained in a weakly supervised manner based on multi-view supervision, completely removing the need for training data with 3D ground-truth annotations. The network architecture is based on two separate networks that disentangle the task into a pose estimation step and a non-rigid surface deformation step. Extensive qualitative and quantitative evaluations show that our approach outperforms the state of the art in terms of quality and robustness.
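
To make the architecture sketched in the abstract concrete, the following is a minimal illustration of the two-network split and the weakly supervised multi-view training signal. The names PoseNet and DefNet mirror the pose/deformation decomposition described above, but every layer choice, tensor shape, and the keypoint reprojection loss below are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn


def image_encoder(out_dim: int = 64) -> nn.Module:
    # Tiny stand-in CNN encoder; the real system uses a far larger backbone.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_dim, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class PoseNet(nn.Module):
    """Estimates skeletal pose (one axis-angle rotation per joint) from an image."""

    def __init__(self, num_joints: int = 23):
        super().__init__()
        self.backbone = image_encoder()
        self.head = nn.Linear(64, num_joints * 3)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(image))


class DefNet(nn.Module):
    """Regresses non-rigid per-vertex offsets on top of the posed template."""

    def __init__(self, num_vertices: int = 5000):
        super().__init__()
        self.backbone = image_encoder()
        self.head = nn.Linear(64, num_vertices * 3)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(image))


def multiview_reprojection_loss(points_3d, cameras, detections_2d):
    # Weak supervision: project predicted 3D points (joints or mesh
    # vertices) into every calibrated camera view and penalize the
    # distance to 2D detections, so no 3D annotations are ever required.
    ones = torch.ones(points_3d.shape[0], 1)
    homogeneous = torch.cat([points_3d, ones], dim=1)       # (N, 4)
    loss = points_3d.new_zeros(())
    for P, kp in zip(cameras, detections_2d):               # P: (3, 4) projection
        proj = (P @ homogeneous.T).T                        # (N, 3)
        proj = proj[:, :2] / proj[:, 2:3].clamp(min=1e-8)   # perspective divide
        loss = loss + ((proj - kp) ** 2).sum(dim=1).mean()
    return loss / len(cameras)

In the paper itself, both networks drive a person-specific template mesh, and the multi-view supervision combines sparse keypoint terms with dense silhouette terms; the sketch above keeps only a single keypoint reprojection term for brevity.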