  Neural Re-Rendering of Humans from a Single Image

Sarkar, K., Mehta, D., Xu, W., Golyanik, V., & Theobalt, C. (2021). Neural Re-Rendering of Humans from a Single Image. Retrieved from https://arxiv.org/abs/2101.04104.


Basic data

Genre: Research paper

Files

arXiv:2101.04104.pdf (Preprint), 9KB
Name:
arXiv:2101.04104.pdf
Description:
File downloaded from arXiv at 2021-01-22 10:05. Published in ECCV 2020
OA status:
Visibility:
Public
MIME type / checksum:
application/xhtml+xml / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

External references


Creators

Creators:
Sarkar, Kripasindhu [1], Author
Mehta, Dushyant [2], Author
Xu, Weipeng [3], Author
Golyanik, Vladislav [1], Author
Theobalt, Christian [1], Author
Affiliations:
[1] Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330
[2] Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
[3] External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Human re-rendering from a single image is a starkly under-constrained
problem, and state-of-the-art algorithms often exhibit undesired artefacts,
such as over-smoothing, unrealistic distortions of the body parts and garments,
or implausible changes of the texture. To address these challenges, we propose
a new method for neural re-rendering of a human under a novel user-defined pose
and viewpoint, given one input image. Our algorithm represents body pose and
shape as a parametric mesh which can be reconstructed from a single image and
easily reposed. Instead of a colour-based UV texture map, our approach further
employs a learned high-dimensional UV feature map to encode appearance. This
rich implicit representation captures detailed appearance variation across
poses, viewpoints, person identities and clothing styles better than learned
colour texture maps. The body model with the rendered feature maps is fed
through a neural image-translation network that creates the final rendered
colour image. The above components are combined in an end-to-end-trained neural
network architecture that takes as input a source person image, and images of
the parametric body model in the source pose and desired target pose.
Experimental evaluation demonstrates that our approach produces higher quality
single image re-rendering results than existing methods.
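
The abstract describes a pipeline in which a learned high-dimensional UV feature map, rendered via a parametric body mesh in the target pose, is translated into an RGB image by a neural network. The following minimal PyTorch sketch (not part of the record, and not the authors' implementation) illustrates that idea under simplifying assumptions: mesh rasterisation is replaced by precomputed per-pixel UV coordinates, and the class names, feature dimensions, and the toy convolutional translator are all hypothetical.

```python
# Hypothetical, minimal sketch of the pipeline described in the abstract.
# Assumptions (not from the record): per-pixel UV coordinates of the reposed
# body mesh are given as input instead of being rasterised from a body model,
# and the image-translation network is a toy convolutional stack.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTextureRenderer(nn.Module):
    """Samples a learned high-dimensional UV feature map (in place of an RGB texture)."""

    def __init__(self, feat_dim: int = 16, uv_res: int = 256):
        super().__init__()
        # Learned UV feature map encoding appearance.
        self.uv_features = nn.Parameter(0.01 * torch.randn(1, feat_dim, uv_res, uv_res))

    def forward(self, uv_coords: torch.Tensor) -> torch.Tensor:
        # uv_coords: (B, H, W, 2) in [-1, 1], one UV coordinate per output pixel,
        # obtained by rendering the parametric body mesh in the target pose.
        feats = self.uv_features.expand(uv_coords.shape[0], -1, -1, -1)
        return F.grid_sample(feats, uv_coords, align_corners=True)  # (B, C, H, W)


class TranslationNet(nn.Module):
    """Toy image-translation network: rendered feature maps -> RGB image."""

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)


if __name__ == "__main__":
    renderer, translator = FeatureTextureRenderer(), TranslationNet()
    # Dummy UV coordinates standing in for the rasterised body model in the target pose.
    uv = torch.rand(1, 128, 128, 2) * 2 - 1
    rgb = translator(renderer(uv))
    print(rgb.shape)  # torch.Size([1, 3, 128, 128])
```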

Details

Language(s): eng - English
Date: 2021-01-11, 2021
Publication status: Published online
Pages: 22 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 2101.04104
URI: https://arxiv.org/abs/2101.04104
BibTex Citekey: Sarkar_arXiv2101.04104
Degree type: -

Event


Decision


Project information


Source
