
Released

Research Paper

Neural Animation and Reenactment of Human Actor Videos

MPG Authors

Liu, Lingjie
Computer Graphics, MPI for Informatics, Max Planck Society;

Xu, Weipeng
Computer Graphics, MPI for Informatics, Max Planck Society;

Zollhöfer, Michael
Computer Graphics, MPI for Informatics, Max Planck Society;

Kim, Hyeongwoo
Computer Graphics, MPI for Informatics, Max Planck Society;

Bernard, Florian
Computer Graphics, MPI for Informatics, Max Planck Society;

Habermann, Marc
Computer Graphics, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Full texts (freely accessible)

arXiv:1809.03658.pdf
(Preprint), 6 MB

Supplementary material (freely accessible)
No freely accessible supplementary material is available.
Citation

Liu, L., Xu, W., Zollhöfer, M., Kim, H., Bernard, F., Habermann, M., et al. (2018). Neural Animation and Reenactment of Human Actor Videos. Retrieved from http://arxiv.org/abs/1809.03658.


Citation link: https://hdl.handle.net/21.11116/0000-0002-5E06-F
Abstract
We propose a method for generating (near) video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require a production-quality photo-realistic 3D model of the human; instead, we rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. Our approach therefore significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and it can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. To train our networks, we first track the 3D motion of the person in the video using the template model, and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method on the reenactment of another person who is tracked to obtain the motion data, and we show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis. Project page: http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/
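The pipeline the abstract outlines (render the tracked 3D template to a synthetic image, then train a conditional GAN to translate that rendering into a realistic frame) follows the general shape of pix2pix-style image-to-image translation. Below is a minimal sketch of one such training step in PyTorch; the network architectures, loss weights, and all names (Generator, Discriminator) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Minimal pix2pix-style conditional GAN sketch: translate a synthetic
# rendering of a 3D template into a realistic frame. Illustrative only;
# the paper's real networks and losses differ.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder mapping a synthetic rendering to an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the synthetic input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, synthetic, image):
        # Condition on the synthetic rendering via channel concatenation.
        return self.net(torch.cat([synthetic, image], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Dummy batch standing in for a (synthetic rendering, real video frame) pair.
synthetic = torch.randn(1, 3, 64, 64)
real = torch.randn(1, 3, 64, 64)

# Discriminator step: push real pairs toward 1, fake pairs toward 0.
fake = G(synthetic).detach()
d_real, d_fake = D(synthetic, real), D(synthetic, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator plus an L1 reconstruction term
# (the 100.0 weight is the common pix2pix default, assumed here).
fake = G(synthetic)
d_fake = D(synthetic, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, real)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

At test time only the generator would be kept: a new pose (tracked from another person, or artist-designed) is rendered through the template model and passed through G to produce the output frame, which is what enables the reenactment results described above.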