
Record


Released

Research Paper

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

MPG Authors
Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society

Zollhöfer, Michael
Computer Graphics, MPI for Informatics, Max Planck Society

Garrido, Pablo
Computer Graphics, MPI for Informatics, Max Planck Society

Bernard, Florian
Computer Graphics, MPI for Informatics, Max Planck Society

Kim, Hyeongwoo
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

External Resources
There are no external resources available.
Full texts (freely accessible)

arXiv:1712.02859.pdf
(Preprint), 4 MB

Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Tewari, A., Zollhöfer, M., Garrido, P., Bernard, F., Kim, H., Pérez, P., et al. (2017). Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz. Retrieved from http://arxiv.org/abs/1712.02859.


Citation link: http://hdl.handle.net/21.11116/0000-0000-615E-A
Abstract
The reconstruction of dense 3D models of face geometry and appearance from a single image is highly challenging and ill-posed. To constrain the problem, many approaches rely on strong priors, such as parametric face models learned from limited 3D scan data. However, prior models restrict generalization of the true diversity in facial geometry, skin reflectance and illumination. To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model. Our multi-level face model combines the advantage of 3D Morphable Models for regularization with the out-of-space generalization of a learned corrective space. We train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss, both defined at multiple detail levels. Our approach compares favorably to the state-of-the-art in terms of reconstruction quality, better generalizes to real world faces, and runs at over 250 Hz.
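The training setup described in the abstract (a convolutional encoder regressing face parameters, a differentiable renderer, and a self-supervised photometric loss on unlabeled in-the-wild images) can be illustrated with a minimal toy sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the module names, image size, parameter dimension, the fixed linear "base model" standing in for a 3D Morphable Model, and the learned linear corrective decoder are all assumptions made for the sake of a runnable example.

# Hypothetical sketch of self-supervised training with a coarse (fixed) model
# plus a learned corrective space; all sizes and modules are illustrative.
import torch
import torch.nn as nn

IMG = 64        # assumed image resolution
N_PARAMS = 80   # assumed size of the regressed parameter vector

class Encoder(nn.Module):
    """CNN that regresses face parameters (shape, expression, reflectance, illumination)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (IMG // 4) ** 2, N_PARAMS),
        )
    def forward(self, x):
        return self.net(x)

class ToyRenderer(nn.Module):
    """Stand-in for the differentiable renderer: a fixed linear base model
    (3DMM-like prior) plus a learnable corrective decoder, both mapping
    parameters to an image."""
    def __init__(self):
        super().__init__()
        base = torch.randn(N_PARAMS, 3 * IMG * IMG) * 0.01
        self.register_buffer("base_model", base)               # fixed prior
        self.corrective = nn.Linear(N_PARAMS, 3 * IMG * IMG)   # learned corrective space
    def forward(self, p):
        img = p @ self.base_model + self.corrective(p)
        return img.view(-1, 3, IMG, IMG)

encoder, renderer = Encoder(), ToyRenderer()
opt = torch.optim.Adam(list(encoder.parameters()) + list(renderer.parameters()), lr=1e-4)

images = torch.rand(8, 3, IMG, IMG)       # a batch of unlabeled face images
params = encoder(images)                  # regress parameters
rendered = renderer(params)               # differentiable rendering of the prediction
loss = (rendered - images).abs().mean()   # self-supervised photometric loss
loss = loss + 1e-4 * params.pow(2).mean() # simple regularizer on the parameters

opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))

In the paper itself the renderer is an expert-designed, model-based differentiable renderer and the self-supervised loss is defined at multiple detail levels; the linear stand-in above only mirrors the coarse-model-plus-learned-corrective split described in the abstract.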