
Released

Paper

Relightable Neural Actor with Intrinsic Decomposition and Pose Control

MPS-Authors
Luvizon, Diogo
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Golyanik, Vladislav
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Kortylewski, Adam
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Habermann, Marc
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:2312.11587.pdf
(Preprint), 14MB

Citation

Luvizon, D., Golyanik, V., Kortylewski, A., Habermann, M., & Theobalt, C. (2023). Relightable Neural Actor with Intrinsic Decomposition and Pose Control. Retrieved from https://arxiv.org/abs/2312.11587.


Cite as: https://hdl.handle.net/21.11116/0000-0010-0BE0-5
Abstract
Creating a controllable and relightable digital avatar from multi-view video with fixed illumination is a very challenging problem, since humans are highly articulated, creating pose-dependent appearance effects, and skin as well as clothing require space-varying BRDF modeling. Existing works on creating animatable avatars either do not focus on relighting at all, require controlled illumination setups, or try to recover a relightable avatar from very low-cost setups, i.e., a single RGB video, at the cost of severely limited result quality, e.g., shadows not even being modeled. To address this, we propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relit, allows appearance editing, and models pose-dependent effects such as wrinkles and self-shadows. Importantly, for training, our method solely requires a multi-view recording of the human under a known but static lighting condition. To tackle this challenging problem, we leverage an implicit geometry representation of the actor with a drivable density field that models pose-dependent deformations, and derive a dynamic mapping between 3D and UV spaces, where normal, visibility, and materials are effectively encoded. To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors, providing the first benchmark of its kind for human relighting, and demonstrating state-of-the-art relighting results for novel human poses.
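The abstract describes encoding normal, visibility, and material properties in UV space via a dynamic 3D-to-UV mapping. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes each 3D surface point has already been assigned a continuous UV coordinate (e.g., by a learned mapping), and shows how per-point material values such as albedo could then be read from a UV-space texture with bilinear interpolation.

```python
# Hypothetical sketch (not the paper's code): look up per-point material
# properties stored in a UV-space texture map, given predicted UV coords.
import numpy as np

def bilinear_sample(tex: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Sample an (H, W, C) texture at continuous uv coords in [0, 1]^2.

    tex: (H, W, C) texture map, e.g. an albedo map stored in UV space.
    uv:  (N, 2) per-point UV coordinates (u along width, v along height).
    Returns: (N, C) interpolated texel values.
    """
    h, w, _ = tex.shape
    x = uv[:, 0] * (w - 1)          # continuous column index
    y = uv[:, 1] * (h - 1)          # continuous row index
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)  # clamp neighbors to the texture border
    y1 = np.minimum(y0 + 1, h - 1)
    fx = (x - x0)[:, None]          # horizontal interpolation weight
    fy = (y - y0)[:, None]          # vertical interpolation weight
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

# Usage: a constant albedo texture interpolates to the same constant value.
albedo = np.full((64, 64, 3), 0.5)            # UV-space albedo map
uv = np.array([[0.25, 0.75], [1.0, 0.0]])     # UV coords of two 3D points
values = bilinear_sample(albedo, uv)          # -> shape (2, 3), all 0.5
```

In the paper's setting the same lookup idea would apply per frame, with the 3D-to-UV mapping itself being pose-dependent; here the mapping is simply assumed as input.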