
Released

Research Paper

Neural Re-Rendering of Humans from a Single Image

MPG Authors
/persons/resource/persons256059

Sarkar,  Kripasindhu
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

/persons/resource/persons129023

Mehta,  Dushyant
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons239654

Golyanik,  Vladislav
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt,  Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

External Resources
No external resources are available.
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

arXiv:2101.04104.pdf
(Preprint), 9KB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Sarkar, K., Mehta, D., Xu, W., Golyanik, V., & Theobalt, C. (2021). Neural Re-Rendering of Humans from a Single Image. Retrieved from https://arxiv.org/abs/2101.04104.


Citation link: https://hdl.handle.net/21.11116/0000-0007-CF05-B
Abstract
Human re-rendering from a single image is a starkly under-constrained
problem, and state-of-the-art algorithms often exhibit undesired artefacts,
such as over-smoothing, unrealistic distortions of the body parts and garments,
or implausible changes of the texture. To address these challenges, we propose
a new method for neural re-rendering of a human under a novel user-defined pose
and viewpoint, given one input image. Our algorithm represents body pose and
shape as a parametric mesh which can be reconstructed from a single image and
easily reposed. Instead of a colour-based UV texture map, our approach further
employs a learned high-dimensional UV feature map to encode appearance. This
rich implicit representation captures detailed appearance variation across
poses, viewpoints, person identities and clothing styles better than learned
colour texture maps. The body model with the rendered feature maps is fed
through a neural image-translation network that creates the final rendered
colour image. The above components are combined in an end-to-end-trained neural
network architecture that takes as input a source person image, and images of
the parametric body model in the source pose and desired target pose.
Experimental evaluation demonstrates that our approach produces higher quality
single image re-rendering results than existing methods.
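The abstract describes a three-stage pipeline: encode the source image into a high-dimensional UV feature map (rather than a colour texture), render the reposed parametric body mesh with that feature map attached, and translate the rendered feature image into a final colour image. The following is a minimal shape-level sketch of that data flow in NumPy; the function names, dimensions, and placeholder operations are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical dimensions: 16-channel UV feature map at 64x64,
# final colour images at 256x256.
FEAT_CH, UV_RES, IMG_RES = 16, 64, 256

def extract_uv_features(source_img):
    """Stand-in for the learned encoder: maps the source person image
    to a high-dimensional UV feature map (instead of an RGB texture)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((FEAT_CH, UV_RES, UV_RES))

def rasterize_features(uv_features, target_pose):
    """Stand-in for rendering the reposed parametric body mesh with the
    UV feature map attached, yielding a screen-space feature image."""
    # A real renderer would sample uv_features at each pixel's UV
    # coordinate under target_pose; here we just broadcast a mean.
    feat = uv_features.mean(axis=(1, 2))                # (FEAT_CH,)
    return np.broadcast_to(feat[:, None, None],
                           (FEAT_CH, IMG_RES, IMG_RES)).copy()

def translate_to_rgb(feature_image):
    """Stand-in for the neural image-translation network that turns the
    rendered feature image into the final colour image."""
    w = np.ones((3, FEAT_CH)) / FEAT_CH                 # dummy 1x1 'conv'
    return np.einsum('cf,fhw->chw', w, feature_image)   # (3, H, W)

def rerender(source_img, target_pose):
    uv_features = extract_uv_features(source_img)
    feature_image = rasterize_features(uv_features, target_pose)
    return translate_to_rgb(feature_image)

out = rerender(source_img=np.zeros((3, IMG_RES, IMG_RES)), target_pose=None)
```

In the paper these stages are trained end to end; the sketch only fixes the interfaces between them, which is where the method's key choice lives: appearance travels through the pipeline as a learned feature map, and colour is produced only at the final translation step.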