
Released

Paper

Monocular Reconstruction of Neural Face Reflectance Fields

MPS-Authors

Mallikarjun B R
Computer Graphics, MPI for Informatics, Max Planck Society

Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Elgharib, Mohamed
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2008.10247.pdf
(Preprint), 21 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Mallikarjun B R, Tewari, A., Oh, T.-H., Weyrich, T., Bickel, B., Seidel, H.-P., et al. (2020). Monocular Reconstruction of Neural Face Reflectance Fields. Retrieved from https://arxiv.org/abs/2008.10247.


Cite as: https://hdl.handle.net/21.11116/0000-0007-B110-E
Abstract
The reflectance field of a face describes the reflectance properties responsible for complex lighting effects, including diffuse reflection, specularities, inter-reflections, and self-shadowing. Most existing methods for estimating face reflectance from a monocular image assume faces to be diffuse, with very few approaches adding a specular component. This still leaves out important perceptual aspects of reflectance, since higher-order global illumination effects and self-shadowing are not modeled. We present a new neural representation for face reflectance from which all components of the reflectance responsible for the final appearance can be estimated from a single monocular image. Instead of modeling each component of the reflectance separately using parametric models, our neural representation allows us to generate a basis set of faces in a geometric deformation-invariant space, parameterized by the input light direction, viewpoint, and face geometry. We learn to reconstruct this reflectance field of a face from just a monocular image, which can then be used to render the face from any viewpoint under any lighting condition. Our method is trained on a light-stage dataset capturing 300 people illuminated under 150 lighting conditions from 8 viewpoints. We show that our method outperforms existing monocular reflectance reconstruction methods in terms of photorealism, owing to better capture of physical primitives such as subsurface scattering, specularities, self-shadows, and other higher-order effects.
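
The representation the abstract describes can be pictured as a network that maps a surface point in a deformation-invariant space, a light direction, and a view direction to reflectance, trained against one-light-at-a-time (OLAT) light-stage captures. The sketch below illustrates that idea in PyTorch; it is not the authors' implementation. The architecture, the 9-D input layout, and the relight helper with its env_weights parameter are assumptions made purely for illustration.

    # Minimal sketch of a neural face reflectance field, assuming a simple
    # MLP conditioned on point, light direction, and view direction. NOT the
    # paper's architecture; sizes and inputs are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FaceReflectanceField(nn.Module):
        """Maps a surface point in a deformation-invariant space plus a
        light direction and a view direction to an RGB reflectance value."""
        def __init__(self, hidden=256):
            super().__init__()
            # Input: 3-D point + 3-D light direction + 3-D view direction.
            self.mlp = nn.Sequential(
                nn.Linear(9, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Softplus(),  # non-negative RGB
            )

        def forward(self, x, light_dir, view_dir):
            # x, light_dir, view_dir: (N, 3) tensors; directions unit-length.
            return self.mlp(torch.cat([x, light_dir, view_dir], dim=-1))

    def relight(field, x, view_dir, light_dirs, env_weights):
        """Hypothetical OLAT relighting: treat a target environment map as a
        weighted sum over the stage's light directions, so the relit image is
        the same weighted sum of per-light predictions.

        light_dirs: (L, 3) stage light directions.
        env_weights: (L, 3) RGB intensities sampled from the environment map
        at those directions (an assumed, precomputed input).
        """
        out = torch.zeros_like(x)
        for l, w in zip(light_dirs, env_weights):
            out += w * field(x, l.expand_as(x), view_dir)
        return out

Under this (assumed) OLAT view, rendering under novel lighting reduces to sampling the environment map at the stage's light directions and summing the weighted per-light predictions, which is consistent with the abstract's claim that the reconstructed field can be rendered from any viewpoint under any lighting condition.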