Record

  InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image

Kim, H., Zollhöfer, M., Tewari, A., Thies, J., Richardt, C., & Theobalt, C. (2017). InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image. Retrieved from http://arxiv.org/abs/1703.10956.


Basic data

Genre: Research paper
LaTeX: {InverseFaceNet}: {D}eep Single-Shot Inverse Face Rendering From A Single Image

Files

arXiv:1703.10956.pdf (Preprint), 5 MB
Name:
arXiv:1703.10956.pdf
Description:
File downloaded from arXiv at 2017-07-05 12:41
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

External references

Creators

Creators:
Kim, Hyeongwoo (1), Author
Zollhöfer, Michael (1), Author
Tewari, Ayush (1), Author
Thies, Justus (2), Author
Richardt, Christian (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot. By estimating all these parameters from just a single image, advanced editing possibilities on a single face image, such as appearance editing and relighting, become feasible. Previous learning-based face reconstruction approaches do not jointly recover all dimensions, or are severely limited in terms of visual quality. In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created dataset. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. In addition, we propose an analysis-by-synthesis breeding approach which iteratively updates the synthetic training corpus based on the distribution of real-world images, and we demonstrate that this strategy outperforms completely synthetically trained networks. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art approaches.
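
A minimal sketch of the parameter-space loss idea described in the abstract, assuming a simple weighted squared error per parameter group. The component layout, dimensions and weights below are hypothetical placeholders, not values from the paper, which chooses its weighting so that distances in parameter space reflect distances in model space.

import numpy as np

# Hypothetical layout of the regressed face-model parameter vector; the
# component names follow the abstract, the dimensions are placeholders.
COMPONENTS = {
    "pose": slice(0, 6),             # rigid head pose (rotation + translation)
    "shape": slice(6, 86),           # identity / geometry coefficients
    "expression": slice(86, 150),    # expression coefficients
    "reflectance": slice(150, 230),  # skin reflectance coefficients
    "illumination": slice(230, 257), # e.g. spherical-harmonics lighting
}

# Illustrative per-component weights (all 1.0 here).
WEIGHTS = {name: 1.0 for name in COMPONENTS}

def parameter_space_loss(pred, target):
    """Weighted squared-error loss summed over the individual parameter groups."""
    loss = 0.0
    for name, idx in COMPONENTS.items():
        diff = pred[idx] - target[idx]
        loss += WEIGHTS[name] * float(np.dot(diff, diff))
    return loss

# Usage with stand-in parameter vectors of the assumed total length 257.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred, target = rng.normal(size=257), rng.normal(size=257)
    print(parameter_space_loss(pred, target))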

Details

Language(s): eng - English
Date: 2017-03-31, 2017
Publication status: Published online
Pages: 10 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1703.10956
URI: http://arxiv.org/abs/1703.10956
BibTeX citekey: DBLP:journals/corr/KimZTTRT17
Degree type: -

Event

Decision

Project information

Source