  Neural Animation and Reenactment of Human Actor Videos

Liu, L., Xu, W., Zollhöfer, M., Kim, H., Bernard, F., Habermann, M., et al. (2018). Neural Animation and Reenactment of Human Actor Videos. Retrieved from http://arxiv.org/abs/1809.03658.

Files

arXiv:1809.03658.pdf (Preprint), 6MB
Name: arXiv:1809.03658.pdf
Description: File downloaded from arXiv at 2018-10-19 08:39
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]

Creators

Liu, Lingjie1, Author
Xu, Weipeng1, Author
Zollhöfer, Michael1, Author
Kim, Hyeongwoo1, Author
Bernard, Florian1, Author
Habermann, Marc1, Author
Wang, Wenping2, Author
Theobalt, Christian1, Author
Affiliations:
1Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
2External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose a method for generating (near) video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model, and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person who is tracked in order to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis. Project page: http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/
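The abstract's core training recipe (render the tracked 3D template model, then learn a conditional GAN that maps each synthetic rendering to a realistic frame) can be illustrated with a toy sketch. The PyTorch snippet below is a minimal pix2pix-style conditional training step under assumed architectures, losses, and hyperparameters; it is not the authors' implementation, whose details are given in the preprint.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Toy encoder-decoder standing in for the paper's rendering-to-image
    # network; the real model would be far deeper (e.g. U-Net-style).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # Judges (synthetic condition, real-or-generated frame) pairs,
    # concatenated along the channel axis, pix2pix-style.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()

# Dummy batch: `synthetic` stands in for a rendered frame of the tracked
# template model, `real` for the corresponding video frame.
synthetic = torch.randn(4, 3, 64, 64)
real = torch.randn(4, 3, 64, 64)

# Discriminator step: real pairs should score 1, generated pairs 0.
fake = G(synthetic).detach()
pred_real = D(synthetic, real)
pred_fake = D(synthetic, fake)
d_loss = adv_loss(pred_real, torch.ones_like(pred_real)) + \
         adv_loss(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: fool the discriminator, plus an L1 term pulling the
# output toward the ground-truth frame (a common pix2pix-style choice;
# the weight 100.0 is an illustrative assumption).
fake = G(synthetic)
pred = D(synthetic, fake)
g_loss = adv_loss(pred, torch.ones_like(pred)) + 100.0 * l1_loss(fake, real)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()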

Details

Language(s): eng - English
Dates: 2018-09-10, 2018
Publication Status: Published online
Pages: 13 p.
Identifiers: arXiv: 1809.03658
URI: http://arxiv.org/abs/1809.03658
BibTex Citekey: Liu_arXiv1809.03658
