
Released

Paper

Neural Re-Rendering of Humans from a Single Image

MPS-Authors
/persons/resource/persons256059

Sarkar, Kripasindhu
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

/persons/resource/persons129023

Mehta, Dushyant
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons239654

Golyanik, Vladislav
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

/persons/resource/persons45610

Theobalt, Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2101.04104.pdf (Preprint), 9KB

Citation

Sarkar, K., Mehta, D., Xu, W., Golyanik, V., & Theobalt, C. (2021). Neural Re-Rendering of Humans from a Single Image. Retrieved from https://arxiv.org/abs/2101.04104.


Cite as: http://hdl.handle.net/21.11116/0000-0007-CF05-B
Abstract
Human re-rendering from a single image is a starkly under-constrained problem, and state-of-the-art algorithms often exhibit undesired artefacts such as over-smoothing, unrealistic distortions of body parts and garments, or implausible changes of texture. To address these challenges, we propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint, given one input image. Our algorithm represents body pose and shape as a parametric mesh that can be reconstructed from a single image and easily reposed. Instead of a colour-based UV texture map, our approach further employs a learned high-dimensional UV feature map to encode appearance. This rich implicit representation captures detailed appearance variation across poses, viewpoints, person identities, and clothing styles better than learned colour texture maps. The body model with the rendered feature maps is fed through a neural image-translation network that creates the final rendered colour image. These components are combined in an end-to-end-trained neural network architecture that takes as input a source person image and images of the parametric body model in the source pose and the desired target pose. Experimental evaluation demonstrates that our approach produces higher-quality single-image re-rendering results than existing methods.
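
The data flow the abstract describes (source image encoded into a learned UV feature texture, the texture rendered onto the body model in the target pose, and a translation network producing the final RGB image) can be sketched in a few lines of PyTorch. This is a hypothetical minimal sketch, not the authors' implementation: the class name HumanReRenderer, the channel count, the texture resolution, the network depths, and the assumption that a per-pixel UV render of the body model in the target pose is supplied externally (e.g. by an SMPL-style rasteriser) are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HumanReRenderer(nn.Module):
    """Hypothetical sketch of the abstract's pipeline, not the authors' code."""
    def __init__(self, feat_ch=16, tex_size=128):
        super().__init__()
        # Source image -> learned high-dimensional UV feature texture
        # (the stand-in for a colour UV texture map).
        self.tex_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_ch, 3, padding=1),
            nn.AdaptiveAvgPool2d(tex_size),  # fixed-size square texture
        )
        # Neural image-translation network: rendered feature image -> RGB.
        self.translator = nn.Sequential(
            nn.Conv2d(feat_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, src_img, tgt_uv):
        # src_img: (B, 3, H, W) source person image.
        # tgt_uv:  (B, H, W, 2) per-pixel UV coordinates of the parametric
        #          body mesh rendered in the target pose, in [-1, 1]
        #          (assumed given by a rasteriser; a real system would also
        #          mask out background pixels).
        tex = self.tex_encoder(src_img)                  # (B, C, T, T)
        # "Render" the feature texture onto the target-pose body:
        # plain texture mapping via bilinear lookup.
        feat_img = F.grid_sample(tex, tgt_uv, align_corners=False)
        return self.translator(feat_img)                 # (B, 3, H, W)

# Shape check only; random tensors stand in for real inputs.
model = HumanReRenderer()
src = torch.randn(1, 3, 256, 256)
uv = torch.rand(1, 256, 256, 2) * 2 - 1                  # fake UV render
rgb = model(src, uv)                                      # (1, 3, 256, 256)

In the actual method the whole stack is trained end to end together with the mesh reconstruction; the sketch only illustrates the central idea that a learned multi-channel texture, rather than RGB colours, is texture-mapped into the target pose before image translation.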