  Monocular Reconstruction of Neural Face Reflectance Fields

Mallikarjun B R, Tewari, A., Oh, T.-H., Weyrich, T., Bickel, B., Seidel, H.-P., et al. (2020). Monocular Reconstruction of Neural Face Reflectance Fields. Retrieved from https://arxiv.org/abs/2008.10247.

Files

arXiv:2008.10247.pdf (Preprint), 21MB
Name:
arXiv:2008.10247.pdf
Description:
File downloaded from arXiv at 2021-01-15 09:18. Project page: http://gvv.mpi-inf.mpg.de/projects/FaceReflectanceFields/
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Mallikarjun B R1, Author
Tewari, Ayush1, Author
Oh, Tae-Hyun2, Author
Weyrich, Tim2, Author
Bickel, Bernd2, Author
Seidel, Hans-Peter1, Author
Pfister, Hanspeter2, Author
Matusik, Wojciech2, Author
Elgharib, Mohamed1, Author
Theobalt, Christian1, Author
Affiliations:
1 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
2 External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR; Computer Science, Learning, cs.LG
Abstract: The reflectance field of a face describes the reflectance properties responsible for complex lighting effects, including diffuse, specular, inter-reflection, and self-shadowing. Most existing methods for estimating face reflectance from a monocular image assume faces to be diffuse, with very few approaches adding a specular component. This still leaves out important perceptual aspects of reflectance, as higher-order global illumination effects and self-shadowing are not modeled. We present a new neural representation for face reflectance from which we can estimate all components of the reflectance responsible for the final appearance from a single monocular image. Instead of modeling each component of the reflectance separately using parametric models, our neural representation allows us to generate a basis set of faces in a geometric deformation-invariant space, parameterized by the input light direction, viewpoint, and face geometry. We learn to reconstruct this reflectance field of a face from just a monocular image, which can then be used to render the face from any viewpoint in any lighting condition. Our method is trained on a light-stage dataset that captures 300 people illuminated with 150 light conditions from 8 viewpoints. We show that our method outperforms existing monocular reflectance reconstruction methods in terms of photorealism due to better capturing of physical primitives such as sub-surface scattering, specularities, self-shadows, and other higher-order effects.
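To make the idea of a reflectance field "parameterized by the input light direction, viewpoint and face geometry" concrete, the following is a minimal sketch of such a conditioned network: an untrained toy MLP that maps a light direction, a view direction, and a geometry feature vector to an RGB reflectance value. All layer sizes, the 16-D geometry code, and the conditioning-by-concatenation scheme are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights; the real model is trained on light-stage data.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def relu(x):
    return np.maximum(x, 0.0)

# Inputs: 3-D light direction + 3-D view direction + 16-D geometry code
# (hypothetical sizes chosen for the sketch).
W1, b1 = init_layer(3 + 3 + 16, 64)
W2, b2 = init_layer(64, 64)
W3, b3 = init_layer(64, 3)  # RGB reflectance output

def reflectance(light_dir, view_dir, geom_code):
    # Condition the network by concatenating all inputs into one vector.
    x = np.concatenate([light_dir, view_dir, geom_code])
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    # Sigmoid keeps the predicted reflectance in (0, 1).
    return 1.0 / (1.0 + np.exp(-(h @ W3 + b3)))

light = np.array([0.0, 0.0, 1.0])   # frontal light
view = np.array([0.0, 0.0, 1.0])    # frontal camera
geom = rng.normal(size=16)          # placeholder geometry feature
rgb = reflectance(light, view, geom)
print(rgb.shape)  # (3,)
```

Because the light and view directions are ordinary inputs, querying such a model over many directions yields the full reflectance field needed to relight the face, which is the capability the abstract describes.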

Details

Language(s): eng - English
Dates: 2020-08-24, 2020
Publication Status: Published online
Pages: 10 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2008.10247
BibTex Citekey: Mallikarjun_2008.10247
URI: https://arxiv.org/abs/2008.10247
Degree: -


Project information

Project name : 4DRepLy
Grant ID : 770784
Funding program : Horizon 2020 (H2020)
Funding organization : European Commission (EC)

Source
