  Face2Face: Real-time Face Capture and Reenactment of RGB Videos

Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., & Nießner, M. (2020). Face2Face: Real-time Face Capture and Reenactment of RGB Videos. Retrieved from https://arxiv.org/abs/2007.14808.

Basic
Genre: Paper
Latex: {Face2Face}: {R}eal-time Face Capture and Reenactment of {RGB} Videos

Files

arXiv:2007.14808.pdf (Preprint), 7MB
Name: arXiv:2007.14808.pdf
Description: File downloaded from arXiv at 2021-02-08 11:42
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Creators:
Thies, Justus (1), Author
Zollhöfer, Michael (2), Author
Stamminger, Marc (1), Author
Theobalt, Christian (2), Author
Nießner, Matthias (1), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We present Face2Face, a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.
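The dense photometric consistency measure mentioned in the abstract can be illustrated with a minimal sketch: an L2 color residual summed over the pixels where the rendered face model is visible. This is an assumption-laden toy version for intuition only; the function name, arguments, and plain sum-of-squares form are illustrative and not the authors' actual energy formulation (which combines several terms and is optimized on the GPU).

```python
import numpy as np

def photometric_energy(synth: np.ndarray, target: np.ndarray,
                       mask: np.ndarray) -> float:
    """Toy dense photometric consistency term (illustrative only).

    synth, target: H x W x 3 float images with values in [0, 1],
    the rendered face model and the observed video frame.
    mask: H x W boolean visibility mask of the rendered face.
    Returns the sum of squared per-channel color differences over
    the visible face pixels.
    """
    # Restrict residuals to pixels covered by the face model.
    diff = (synth - target)[mask]
    return float(np.sum(diff ** 2))
```

In a tracker, a term like this would be minimized over the model's pose, identity, and expression parameters at every frame, which is what makes the measure "dense": every covered pixel contributes a residual, rather than only a sparse set of landmarks.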

Details

Language(s): eng - English
Dates: 2020-07-29, 2020
Publication Status: Published online
Pages: 12 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2007.14808
BibTex Citekey: Thies_2007.14808
URI: https://arxiv.org/abs/2007.14808
Degree: -
