  DeepCap: Monocular Human Performance Capture Using Weak Supervision

Habermann, M., Xu, W., Zollhöfer, M., Pons-Moll, G., & Theobalt, C. (2020). DeepCap: Monocular Human Performance Capture Using Weak Supervision. Retrieved from https://arxiv.org/abs/2003.08325.

Basic data

Genre: Research Paper
LaTeX: {DeepCap}: {M}onocular Human Performance Capture Using Weak Supervision

Files

arXiv:2003.08325.pdf (Preprint), 3MB
Name:
arXiv:2003.08325.pdf
Description:
File downloaded from arXiv at 2021-02-03 07:46
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

Creators

Creators:
Habermann, Marc1, Author
Xu, Weipeng1, Author
Zollhöfer, Michael2, Author
Pons-Moll, Gerard3, Author
Theobalt, Christian1, Author
Affiliations:
1Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
2External Organizations, ou_persistent22
3Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality. Many previous performance capture approaches either required expensive multi-view setups or did not recover dense space-time coherent geometry with frame-to-frame correspondences. We propose a novel deep learning approach for monocular dense human performance capture. Our method is trained in a weakly supervised manner based on multi-view supervision, completely removing the need for training data with 3D ground truth annotations. The network architecture is based on two separate networks that disentangle the task into a pose estimation and a non-rigid surface deformation step. Extensive qualitative and quantitative evaluations show that our approach outperforms the state of the art in terms of quality and robustness.
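
The abstract describes a two-network design: one network estimates skeletal pose, a second predicts non-rigid surface deformation on top of it. The following minimal PyTorch sketch only illustrates that split; all names, layer sizes, the joint count, and the per-node deformation parameterization are assumptions for illustration, and the multi-view weak-supervision losses are omitted. It is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Regresses skeletal pose parameters (joint angles + root transform) from one image."""
    def __init__(self, num_joints=23, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a CNN image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_joints * 3 + 6)  # angles + root rotation/translation

    def forward(self, img):
        return self.head(self.backbone(img))

class DefNet(nn.Module):
    """Regresses non-rigid deformation parameters (here: a rotation and translation
    per node of an assumed deformation graph) applied on top of the posed template."""
    def __init__(self, num_graph_nodes=500, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_graph_nodes * 6)  # rotation + translation per node

    def forward(self, img):
        return self.head(self.backbone(img))

if __name__ == "__main__":
    img = torch.randn(1, 3, 256, 256)   # a single monocular input frame
    pose = PoseNet()(img)                # skeletal pose parameters
    deform = DefNet()(img)               # non-rigid deformation parameters
    print(pose.shape, deform.shape)
```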

Details

Language(s): eng - English
Date: 2020-03-18
Publication status: Published online
Pages: 12 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 2003.08325
BibTeX citekey: Habermann2003.08325
URI: https://arxiv.org/abs/2003.08325
Degree type: -
