Released

Journal Article

Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control

MPS-Authors

Liu, Lingjie
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Habermann, Marc
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Rudnev, Viktor
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Sarkar, Kripasindhu
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Theobalt, Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2106.02019.pdf
(Preprint), 13MB

3478513.3480528.pdf
(Publisher version), 13MB

Citation

Liu, L., Habermann, M., Rudnev, V., Sarkar, K., Gu, J., & Theobalt, C. (2022). Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control. ACM Transactions on Graphics, 41(6): 219, pp. 1-16. doi:10.1145/3478513.3480528.


Cite as: https://hdl.handle.net/21.11116/0000-0009-5320-5
Abstract
We propose Neural Actor (NA), a new method for high-quality synthesis of humans
from arbitrary viewpoints and under arbitrary controllable poses. Our method is
built upon recent neural scene representation and rendering works that learn
representations of geometry and appearance from only 2D images. While existing
works have demonstrated compelling rendering of static scenes and playback of
dynamic scenes, photo-realistic reconstruction and rendering of humans with
neural implicit methods, in particular under user-controlled novel poses,
remains difficult. To address this problem, we utilize a coarse body model as a
proxy to unwarp the surrounding 3D space into a canonical pose. A neural
radiance field learns pose-dependent geometric deformations and pose- and
view-dependent appearance effects in the canonical space from multi-view video
input. To synthesize novel views of high-fidelity dynamic geometry and
appearance, we leverage 2D texture maps defined on the body model as latent
variables for predicting residual deformations and the dynamic appearance.
Experiments demonstrate that our method achieves better quality than the state
of the art on playback as well as novel pose synthesis, and can even generalize
well to new poses that starkly differ from the training poses. Furthermore, our
method supports body shape control of the synthesized results.
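
The abstract describes the core data flow: world-space query points are
unwarped into a canonical pose via a coarse body-model proxy, and a neural
radiance field conditioned on texture-map latents predicts density and
view-dependent color. The sketch below illustrates only that interface; it is
not the authors' implementation, and all names, layer sizes, and the simplified
unwarping stub are assumptions made purely for illustration.

```python
# Illustrative sketch only -- NOT the Neural Actor code. It mirrors the
# high-level pipeline from the abstract: (1) unwarp a world-space point to
# a canonical pose, (2) attach a latent code sampled from a 2D texture map,
# (3) query a NeRF-style MLP for density and view-dependent color.
import torch
import torch.nn as nn


class CanonicalRadianceField(nn.Module):
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        # Backbone over canonical position + texture latent (sizes assumed)
        self.backbone = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Color additionally depends on the viewing direction
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x_canonical, view_dir, tex_latent):
        h = self.backbone(torch.cat([x_canonical, tex_latent], dim=-1))
        sigma = torch.relu(self.density_head(h))  # volume density >= 0
        rgb = self.color_head(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma


def unwarp_to_canonical(x_world, body_offset):
    """Stand-in for the body-model-based unwarping from the abstract: the
    real method uses a coarse parametric body mesh to map points into a
    canonical pose; here we only subtract an offset so the sketch runs."""
    return x_world - body_offset


# Toy forward pass on random data
field = CanonicalRadianceField()
x = torch.randn(1024, 3)   # points sampled along camera rays
d = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
z = torch.randn(1024, 32)  # latents, in the paper taken from texture maps
rgb, sigma = field(unwarp_to_canonical(x, torch.zeros_like(x)), d, z)
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```

In the actual method the unwarping follows the skinning of the body-model
proxy and the latents are predicted from pose-dependent texture maps; the
stub above only reproduces the shape of that interface.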