Record

 
 
  High-Fidelity Neural Human Motion Transfer from Monocular Video

Kappel, M., Golyanik, V., Elgharib, M., Henningson, J.-O., Seidel, H.-P., Castillo, S., Theobalt, C., & Magnor, M. A. (2020). High-Fidelity Neural Human Motion Transfer from Monocular Video. Retrieved from https://arxiv.org/abs/2012.10974.


Basic Data

Genre: Research Paper

Files

arXiv:2012.10974.pdf (Preprint), 25 MB
Name:
arXiv:2012.10974.pdf
Description:
File downloaded from arXiv at 2021-01-19 13:20
OA Status:
Visibility:
Public
MIME Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Kappel, Moritz (1), Author
Golyanik, Vladislav (2), Author
Elgharib, Mohamed (2), Author
Henningson, Jann-Ole (1), Author
Seidel, Hans-Peter (2), Author
Castillo, Susana (1), Author
Theobalt, Christian (2), Author
Magnor, Marcus A. (1), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition (cs.CV); Computer Science, Graphics (cs.GR); Computer Science, Learning (cs.LG)
Abstract: Video-based human motion transfer creates video animations of humans following a source motion. Current methods show remarkable results for tightly-clad subjects. However, the lack of temporally consistent handling of plausible clothing dynamics, including fine and high-frequency details, significantly limits the attainable visual quality. We address these limitations for the first time in the literature and present a new framework which performs high-fidelity and temporally consistent human motion transfer with natural pose-dependent non-rigid deformations, for several types of loose garments. In contrast to previous techniques, we perform image generation in three subsequent stages, synthesizing human shape, structure, and appearance. Given a monocular RGB video of an actor, we train a stack of recurrent deep neural networks that generate these intermediate representations from 2D poses and their temporal derivatives. Splitting the difficult motion transfer problem into subtasks that are aware of the temporal motion context helps us to synthesize results with plausible dynamics and pose-dependent detail. It also allows artistic control of the results by manipulation of individual framework stages. In the experimental results, we significantly outperform the state of the art in terms of video realism. Our code and data will be made publicly available.
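
The staged pipeline described in the abstract can be illustrated with a short sketch. The following Python/PyTorch code is hypothetical and only mirrors the high-level idea: three recurrent networks, conditioned on 2D poses and their temporal derivatives (approximated here by finite differences), generate shape, structure, and appearance in sequence, each stage consuming the previous stage's output. All module names, layer sizes, channel counts, and the choice of a GRU are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

class RecurrentStage(nn.Module):
    """One stage: a GRU over time, then a decoder to an image-like output."""
    def __init__(self, in_dim, hidden_dim, out_channels, out_size=64):
        super().__init__()
        self.out_size = out_size
        self.out_channels = out_channels
        self.rnn = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.decode = nn.Linear(hidden_dim, out_channels * out_size * out_size)

    def forward(self, x):
        # x: (batch, time, in_dim)
        h, _ = self.rnn(x)
        out = self.decode(h)                       # (batch, time, C*H*W)
        b, t, _ = out.shape
        return out.view(b, t, self.out_channels, self.out_size, self.out_size)

class MotionTransferSketch(nn.Module):
    def __init__(self, num_joints=25, hidden=256, size=64):
        super().__init__()
        pose_dim = num_joints * 2                  # 2D joint positions
        feat_dim = 2 * pose_dim                    # poses + temporal derivatives
        img_feat = size * size
        # Stage 1: human shape (e.g. a 1-channel silhouette mask).
        self.shape_net = RecurrentStage(feat_dim, hidden, 1, size)
        # Stage 2: structure (e.g. body/garment part labels), conditioned
        # on the pose features plus the flattened shape output.
        self.structure_net = RecurrentStage(feat_dim + img_feat, hidden, 8, size)
        # Stage 3: appearance (RGB), conditioned on pose features + structure.
        self.appearance_net = RecurrentStage(feat_dim + 8 * img_feat, hidden, 3, size)

    def forward(self, poses):
        # poses: (batch, time, num_joints*2); finite differences stand in
        # for the temporal derivatives mentioned in the abstract.
        vel = poses - torch.cat([poses[:, :1], poses[:, :-1]], dim=1)
        feats = torch.cat([poses, vel], dim=-1)
        shape = self.shape_net(feats)
        structure = self.structure_net(
            torch.cat([feats, shape.flatten(2)], dim=-1))
        rgb = self.appearance_net(
            torch.cat([feats, structure.flatten(2)], dim=-1))
        return shape, structure, rgb

# Usage: a batch of 4 clips, 16 frames each, 25 joints in 2D.
model = MotionTransferSketch()
poses = torch.randn(4, 16, 50)
shape, structure, rgb = model(poses)
print(shape.shape, structure.shape, rgb.shape)

Because each stage is a separate module, intermediate outputs (shape, structure) can be inspected or edited before being passed on, which loosely corresponds to the per-stage artistic control mentioned in the abstract.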

Details

Language(s): eng - English
Date: 2020-12-20
Publication Status: Published online
Pages: 14 p.
Place, Publisher, Edition: -
Table of Contents: -
Review Type: -
Identifiers: arXiv: 2012.10974
BibTeX Citekey: Kappel_arXiv2012.10974
URI: https://arxiv.org/abs/2012.10974
Degree Type: -
