  EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

Elgharib, M., Mallikarjun B R, Tewari, A., Kim, H., Liu, W., Seidel, H.-P., et al. (2019). EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment. Retrieved from http://arxiv.org/abs/1905.10822.


Files

arXiv:1905.10822.pdf (Preprint), 4MB
Name:
arXiv:1905.10822.pdf
Description:
File downloaded from arXiv at 2019-07-04 08:45. Project page: http://gvv.mpi-inf.mpg.de/projects/EgoFace/
OA-Status:
-
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Locators

Locator:
http://gvv.mpi-inf.mpg.de/projects/EgoFace/ (Supplementary material)
Description:
-
OA-Status:
-
Creators

Creators:
Elgharib, Mohamed 1, Author
Mallikarjun B R 1, Author
Tewari, Ayush 1, Author
Kim, Hyeongwoo 1, Author
Liu, Wentao 1, Author
Seidel, Hans-Peter 1, Author
Theobalt, Christian 1, Author
Affiliations:
1 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR
 Abstract: Face performance capture and reenactment techniques use multiple cameras and
sensors, positioned at a distance from the face or mounted on heavy wearable
devices. This limits their applications in mobile and outdoor environments. We
present EgoFace, a radically new lightweight setup for face performance capture
and front-view videorealistic reenactment using a single egocentric RGB camera.
Our lightweight setup allows operations in uncontrolled environments, and lends
itself to telepresence applications such as video-conferencing from dynamic
environments. The input image is projected into a low dimensional latent space
of the facial expression parameters. Through careful adversarial training of
the parameter-space synthetic rendering, a videorealistic animation is
produced. Our problem is challenging as the human visual system is sensitive to
the smallest face irregularities that could occur in the final results. This
sensitivity is even stronger for video results. Our solution is trained in a
pre-processing stage in a supervised manner, without manual annotations.
EgoFace captures a wide variety of facial expressions, including mouth
movements and asymmetric expressions. It works under varying illumination,
backgrounds, and motion, handles people of different ethnicities, and can
operate in real time.
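The pipeline the abstract describes (egocentric RGB frame → low-dimensional facial expression parameters → synthesized front-view frame) can be sketched as below. This is a minimal illustrative stand-in, not the authors' implementation: the dimensions, the linear "encoder", and the linear "renderer" are all assumptions; in EgoFace both stages are trained networks, with the renderer trained adversarially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
IMG_SHAPE = (64, 64, 3)   # egocentric RGB input frame
LATENT_DIM = 64           # low-dimensional expression-parameter space

# Stand-in "encoder": projects the flattened input frame into the
# latent expression-parameter space (a trained network in EgoFace).
W_enc = rng.standard_normal((int(np.prod(IMG_SHAPE)), LATENT_DIM)) * 0.01

def encode(image: np.ndarray) -> np.ndarray:
    """Project an egocentric frame to expression parameters."""
    return image.reshape(-1) @ W_enc

# Stand-in "renderer": maps expression parameters back to a frontal
# frame (an adversarially trained synthesis network in EgoFace).
W_dec = rng.standard_normal((LATENT_DIM, int(np.prod(IMG_SHAPE)))) * 0.01

def render(params: np.ndarray) -> np.ndarray:
    """Synthesize a front-view frame from expression parameters."""
    return (params @ W_dec).reshape(IMG_SHAPE)

frame = rng.random(IMG_SHAPE)     # a dummy egocentric input frame
params = encode(frame)            # low-dimensional expression code
output = render(params)           # synthesized frontal frame (stand-in)
print(params.shape, output.shape)  # (64,) (64, 64, 3)
```

The point of the sketch is the bottleneck: every output frame is determined entirely by the small expression-parameter vector, which is what makes reenactment (driving one face with another's parameters) possible.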

Details

Language(s): eng - English
Dates: 2019-05-26, 2019
 Publication Status: Published online
 Pages: 10 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1905.10822
URI: http://arxiv.org/abs/1905.10822
BibTex Citekey: Elgharib_arXiv1905.10822
 Degree: -
