  MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

Tewari, A., Zollhöfer, M., Kim, H., Garrido, P., Bernard, F., Pérez, P., et al. (2017). MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. Retrieved from http://arxiv.org/abs/1703.10580.


Basic data

Genre: Research paper
LaTeX: {MoFA}: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

Files

arXiv:1703.10580.pdf (Preprint), 10MB
Name: arXiv:1703.10580.pdf
Description: File downloaded from arXiv at 2017-07-05 13:23
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: -
Copyright info: -

External references

Creators

Creators:
Tewari, Ayush (1), Author
Zollhöfer, Michael (1), Author
Kim, Hyeongwoo (1), Author
Garrido, Pablo (1), Author
Bernard, Florian (2), Author
Pérez, Patrick (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as decoder. The core innovation is our new differentiable parametric decoder that encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation.
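The architecture summarized in the abstract — a trainable encoder that regresses a semantic code vector, a fixed analytic decoder that re-renders from it, and an unsupervised photometric loss backpropagated through the decoder — can be illustrated in miniature. The following NumPy sketch is not the authors' code: the dimensions, the linear generative "face model", and the single-layer encoder are all simplifying assumptions chosen only to show the training signal flow.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIX = 64   # toy flattened "image" size (hypothetical)
N_CODE = 8   # toy semantic code: stand-in for pose/shape/expression/etc.

# Expert-designed decoder: a fixed linear generative model (3DMM-style,
# mean plus linear basis) standing in for the paper's analytic renderer.
mean_face = rng.normal(size=N_PIX)
basis = rng.normal(size=(N_PIX, N_CODE)) / np.sqrt(N_CODE)

def decode(code):
    """Differentiable parametric decoder: render an image from the code."""
    return mean_face + basis @ code

# Trainable encoder: a single linear layer for illustration.
W = np.zeros((N_CODE, N_PIX))

def encode(image):
    return W @ image

def train_step(image, lr=1e-3):
    """One unsupervised step on the photometric reconstruction loss."""
    global W
    code = encode(image)
    residual = decode(code) - image          # photometric error
    grad_code = basis.T @ residual           # backprop through the decoder
    W -= lr * np.outer(grad_code, image)     # chain rule into the encoder
    return float(np.mean(residual ** 2))

# "Train" on unlabeled images (here sampled from the generative model itself);
# no ground-truth codes are ever used, mirroring the unsupervised setup.
losses = [train_step(decode(rng.normal(size=N_CODE))) for _ in range(500)]
```

After training, `losses[-1]` is far below `losses[0]`: the encoder has learned to invert the fixed generative model purely from the image-space reconstruction error, which is the core idea the paper scales up to a CNN encoder and a full face-image-formation decoder.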

Details

Language(s): eng - English
Date: 2017-03-30
Publication status: Published online
Pages: 10 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 1703.10580
URI: http://arxiv.org/abs/1703.10580
BibTeX citekey: DBLP:journals/corr/TewariZK0BPT17
Degree type: -

Event

Decision

Project information

Source