  Neural Re-Rendering of Humans from a Single Image

Sarkar, K., Mehta, D., Xu, W., Golyanik, V., & Theobalt, C. (2021). Neural Re-Rendering of Humans from a Single Image. Retrieved from https://arxiv.org/abs/2101.04104.


Files

arXiv:2101.04104.pdf (Preprint), 9KB
Name:
arXiv:2101.04104.pdf
Description:
File downloaded from arXiv at 2021-01-22 10:05. Published in ECCV 2020.
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/xhtml+xml / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-


Creators

 Creators:
Sarkar, Kripasindhu [1], Author
Mehta, Dushyant [2], Author
Xu, Weipeng [3], Author
Golyanik, Vladislav [1], Author
Theobalt, Christian [1], Author
Affiliations:
[1] Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330
[2] Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
[3] External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
 Abstract: Human re-rendering from a single image is a starkly under-constrained
problem, and state-of-the-art algorithms often exhibit undesired artefacts,
such as over-smoothing, unrealistic distortions of the body parts and garments,
or implausible changes of the texture. To address these challenges, we propose
a new method for neural re-rendering of a human under a novel user-defined pose
and viewpoint, given one input image. Our algorithm represents body pose and
shape as a parametric mesh which can be reconstructed from a single image and
easily reposed. Instead of a colour-based UV texture map, our approach further
employs a learned high-dimensional UV feature map to encode appearance. This
rich implicit representation captures detailed appearance variation across
poses, viewpoints, person identities and clothing styles better than learned
colour texture maps. The body model with the rendered feature maps is fed
through a neural image-translation network that creates the final rendered
colour image. The above components are combined in an end-to-end-trained neural
network architecture that takes as input a source person image, and images of
the parametric body model in the source pose and desired target pose.
Experimental evaluation demonstrates that our approach produces higher quality
single image re-rendering results than existing methods.
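The pipeline the abstract describes (lift the source image into a learned UV feature map, rasterise that feature map under the target pose, then translate the feature image to colour) can be sketched as a toy NumPy program. Everything below is an illustrative assumption rather than the authors' implementation: the function names, tensor shapes, and the nearest-neighbour UV sampling are placeholders for the paper's trained networks and differentiable renderer.

```python
import numpy as np

H, W = 64, 64   # toy image resolution
C_FEAT = 16     # channels of the learned UV feature map (hypothetical)

def encode_uv_features(source_image):
    """Stand-in for the network that lifts the source image into a
    high-dimensional UV feature map (instead of an RGB texture)."""
    feat = np.zeros((C_FEAT, H, W), dtype=np.float32)
    feat[:3] = np.transpose(source_image, (2, 0, 1))  # seed with colour
    return feat

def render_features(uv_features, target_uv):
    """Stand-in for rasterising the reposed body model: sample the UV
    feature map at each output pixel's UV coordinate (nearest neighbour)."""
    u = (target_uv[..., 0] * (W - 1)).astype(int)
    v = (target_uv[..., 1] * (H - 1)).astype(int)
    return uv_features[:, v, u]  # (C_FEAT, H, W) feature image

def translate_to_rgb(feature_image):
    """Stand-in for the neural image-translation network that maps the
    rendered feature image to the final colour image."""
    return np.clip(feature_image[:3].transpose(1, 2, 0), 0.0, 1.0)

# Dummy inputs: a source image and per-pixel UV coordinates for the
# body model rendered in the desired target pose.
source_image = np.random.rand(H, W, 3).astype(np.float32)
target_uv = np.random.rand(H, W, 2).astype(np.float32)

uv_features = encode_uv_features(source_image)
rendered = render_features(uv_features, target_uv)
output = translate_to_rgb(rendered)
```

In the paper these three stages are trained end to end; the sketch only makes the data flow concrete: appearance lives in UV space as features rather than colours, so reposing reduces to re-sampling the same feature map under new per-pixel UV coordinates.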

Details

Language(s): eng - English
Dates: 2021-01-11; 2021
Publication Status: Published online
Pages: 22 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2101.04104
URI: https://arxiv.org/abs/2101.04104
BibTeX Citekey: Sarkar_arXiv2101.04104
Degree: -
