
Record


Released

Research Paper

Monocular Reconstruction of Neural Face Reflectance Fields

MPG Authors

Mallikarjun B R, Computer Graphics, MPI for Informatics, Max Planck Society
Tewari, Ayush, Computer Graphics, MPI for Informatics, Max Planck Society
Seidel, Hans-Peter, Computer Graphics, MPI for Informatics, Max Planck Society
Elgharib, Mohamed, Computer Graphics, MPI for Informatics, Max Planck Society
Theobalt, Christian, Computer Graphics, MPI for Informatics, Max Planck Society

External Resources
No external resources have been deposited
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

arXiv:2008.10247.pdf
(Preprint), 21MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Mallikarjun B R, Tewari, A., Oh, T.-H., Weyrich, T., Bickel, B., Seidel, H.-P., et al. (2020). Monocular Reconstruction of Neural Face Reflectance Fields. Retrieved from https://arxiv.org/abs/2008.10247.


Citation link: https://hdl.handle.net/21.11116/0000-0007-B110-E
Abstract
The reflectance field of a face describes the reflectance properties
responsible for complex lighting effects including diffuse, specular,
inter-reflection, and self-shadowing. Most existing methods for estimating the
face reflectance from a monocular image assume faces to be diffuse with very
few approaches adding a specular component. This still leaves out important
perceptual aspects of reflectance, since higher-order global illumination effects
and self-shadowing are not modeled. We present a new neural representation for
face reflectance where we can estimate all components of the reflectance
responsible for the final appearance from a single monocular image. Instead of
modeling each component of the reflectance separately using parametric models,
our neural representation allows us to generate a basis set of faces in a
geometric deformation-invariant space, parameterized by the input light
direction, viewpoint and face geometry. We learn to reconstruct this
reflectance field of a face just from a monocular image, which can be used to
render the face from any viewpoint in any light condition. Our method is
trained on a light-stage training dataset, which captures 300 people
illuminated with 150 light conditions from 8 viewpoints. We show that our
method outperforms existing monocular reflectance reconstruction methods in
terms of photorealism, owing to its better capture of physical primitives such as
sub-surface scattering, specularities, self-shadows and other higher-order
effects.
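The core idea in the abstract — a neural function that predicts face appearance conditioned on light direction, viewpoint, and geometry — can be illustrated with a minimal sketch. This is not the paper's actual architecture; all layer sizes, feature dimensions, and function names here are illustrative assumptions, using a tiny NumPy MLP in place of the learned reflectance network:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim):
    """Random weights for a 2-layer MLP (illustrative only, untrained)."""
    return {
        "W1": rng.normal(0.0, 0.1, (in_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, out_dim)),
        "b2": np.zeros(out_dim),
    }

def reflectance_field(params, light_dir, view_dir, geom_feat):
    """Predict an RGB value per surface point.

    light_dir: (3,) unit light direction
    view_dir:  (3,) unit view direction
    geom_feat: (N, F) per-point geometry features, assumed to live in a
               deformation-invariant space as the abstract describes
    """
    n = geom_feat.shape[0]
    cond = np.concatenate([light_dir, view_dir])              # (6,) conditioning
    x = np.concatenate([geom_feat, np.tile(cond, (n, 1))], axis=1)
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])      # ReLU hidden layer
    logits = h @ params["W2"] + params["b2"]
    return 1.0 / (1.0 + np.exp(-logits))                      # sigmoid -> (0, 1) RGB

# Once reconstructed from a single image, the same field can be queried
# under any new light direction and viewpoint (relighting / novel views).
params = init_mlp(in_dim=16 + 6, hidden=32, out_dim=3)
geom = rng.normal(size=(100, 16))                             # hypothetical features
rgb = reflectance_field(params,
                        np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 1.0, 0.0]),
                        geom)
print(rgb.shape)  # (100, 3)
```

Because light and view directions enter only as conditioning inputs, rendering under a new light condition is just another forward pass with a different `light_dir`, which mirrors how the paper's representation enables relighting from any viewpoint.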