  Neural Animation and Reenactment of Human Actor Videos

Liu, L., Xu, W., Zollhöfer, M., Kim, H., Bernard, F., Habermann, M., et al. (2018). Neural Animation and Reenactment of Human Actor Videos. Retrieved from http://arxiv.org/abs/1809.03658.


Basic data

Genre: Research paper

Files

arXiv:1809.03658.pdf (Preprint), 6 MB
Name:
arXiv:1809.03658.pdf
Description:
File downloaded from arXiv at 2018-10-19 08:39
OA status:
-
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
-
Copyright date:
-
Copyright info:
-

External references

External reference:
http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/ (Supplementary material)
Description:
-
OA status:
-

Creators

Creators:
Liu, Lingjie 1, Author
Xu, Weipeng 1, Author
Zollhöfer, Michael 1, Author
Kim, Hyeongwoo 1, Author
Bernard, Florian 1, Author
Habermann, Marc 1, Author
Wang, Wenping 2, Author
Theobalt, Christian 1, Author
Affiliations:
1 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
2 External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose a method for generating (near) video-realistic animations of real
humans under user control. In contrast to conventional human character
rendering, we do not require the availability of a production-quality
photo-realistic 3D model of the human, but instead rely on a video sequence in
conjunction with a (medium-quality) controllable 3D template model of the
person. With that, our approach significantly reduces production cost compared
to conventional rendering approaches based on production-quality 3D models, and
can also be used to realistically edit existing videos. Technically, this is
achieved by training a neural network that translates simple synthetic images
of a human character into realistic imagery. For training our networks, we
first track the 3D motion of the person in the video using the template model,
and subsequently generate a synthetically rendered version of the video. These
images are then used to train a conditional generative adversarial network that
translates synthetic images of the 3D model into realistic imagery of the
human. We evaluate our method for the reenactment of another person that is
tracked in order to obtain the motion data, and show video results generated
from artist-designed skeleton motion. Our results outperform the
state-of-the-art in learning-based human image synthesis. Project page:
http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/
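
The abstract describes a pix2pix-style pipeline: synthetic renderings of the tracked 3D template condition a generative adversarial network that outputs realistic frames, trained against the real video. The toy numpy sketch below illustrates only the shape of that objective (adversarial loss on discriminator logits plus an L1 reconstruction term, as is common in conditional image translation); the "generator" and the discriminator logits are random stand-ins, not the authors' networks, and all shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's training data: per-frame synthetic renderings
# of the tracked template (the cGAN's conditioning input) and the matching
# real video frames (the target). Tiny 8x8x3 "images" for illustration only.
H, W, C = 8, 8, 3
synthetic = rng.random((4, H, W, C))   # rendered template frames
real = rng.random((4, H, W, C))        # ground-truth video frames

def generator(cond):
    """Hypothetical placeholder for the translation network: a fixed
    per-pixel affine map, NOT a learned model."""
    return np.clip(0.9 * cond + 0.05, 0.0, 1.0)

def gan_losses(d_real_logits, d_fake_logits):
    """Standard non-saturating GAN losses computed from discriminator logits."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    d_loss = -np.mean(np.log(sigmoid(d_real_logits))
                      + np.log(1.0 - sigmoid(d_fake_logits)))
    g_loss = -np.mean(np.log(sigmoid(d_fake_logits)))
    return d_loss, g_loss

fake = generator(synthetic)
# A real conditional discriminator would score (conditioning, image) pairs;
# random logits here merely exercise the loss terms.
d_loss, g_loss = gan_losses(rng.normal(size=4), rng.normal(size=4))

# L1 term commonly added to the adversarial loss in image-translation GANs.
l1 = np.mean(np.abs(fake - real))
print(fake.shape, d_loss > 0, g_loss > 0)
```

In the paper's actual setting, the generator and discriminator would be convolutional networks and the conditioning images come from rendering the tracked 3D template per frame; the sketch only fixes ideas about the loss structure.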

Details

Language(s): eng - English
Date: 2018-09-10
Publication status: Published online
Pages: 13 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1809.03658
URI: http://arxiv.org/abs/1809.03658
BibTex Citekey: Liu_arXiv1809.03658
Degree type: -
