Text-based Editing of Talking-head Video

Fried, O., Tewari, A., Zollhöfer, M., Finkelstein, A., Shechtman, E., Goldman, D. B., et al. (2019). Text-based Editing of Talking-head Video. Retrieved from http://arxiv.org/abs/1906.01524.

Files

arXiv:1906.01524.pdf (Preprint), 11MB
Name: arXiv:1906.01524.pdf
Description: File downloaded from arXiv at 2019-07-09 10:32. A version with higher-resolution images can be downloaded from the authors' website.
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Fried, Ohad1, Author
Tewari, Ayush2, Author           
Zollhöfer, Michael1, Author           
Finkelstein, Adam1, Author
Shechtman, Eli1, Author
Goldman, Dan B.1, Author
Genova, Kyle1, Author
Jin, Zeyu1, Author
Theobalt, Christian2, Author           
Agrawala, Maneesh1, Author
Affiliations:
1External Organizations, ou_persistent22              
2Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047              

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Graphics, cs.GR; Computer Science, Learning, cs.LG
Abstract: Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e., no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression, and scene illumination per frame. To edit a video, the user only has to edit the transcript; an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation into a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full-sentence synthesis.
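
The abstract outlines a three-stage pipeline: an optimization that selects phoneme/viseme-matched segments from the annotated input corpus, parameter stitching that drives a parametric face model to composite the lower face, and a recurrent network that renders the photorealistic result. The Python sketch below mirrors only that control flow; every function, class, and field name is a hypothetical placeholder, not the authors' implementation.

from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class FrameAnnotation:
    """Per-frame labels the method recovers automatically (field names are illustrative)."""
    phoneme: str                  # aligned phoneme label
    viseme: str                   # visual speech unit for that phoneme
    pose: Sequence[float]         # 3D head pose parameters
    geometry: Sequence[float]     # parametric face-model geometry coefficients
    reflectance: Sequence[float]  # skin reflectance coefficients
    expression: Sequence[float]   # facial expression coefficients
    illumination: Sequence[float] # scene lighting coefficients

def select_segments(frames, transcript, edited_transcript):
    # Placeholder for the paper's optimization, which picks input segments
    # whose phoneme/viseme sequences best cover the edited transcript.
    return [frames]

def stitch_parameters(segments):
    # Placeholder for seamless blending of the annotated parameters
    # across segment seams (avoiding jump cuts).
    return [f for seg in segments for f in seg]

def render_lower_face(frame):
    # Placeholder: render the lower half of the face with the parametric
    # face model, driven by the stitched per-frame parameters.
    return {"composite_frame_for": frame.viseme}

def neural_render(intermediate):
    # Placeholder for the recurrent video-generation network that maps the
    # intermediate representation to photorealistic frames.
    return intermediate

def edit_talking_head(frames: List[FrameAnnotation],
                      transcript: str,
                      edited_transcript: str):
    """End-to-end sketch: select segments, stitch, composite, render."""
    segments = select_segments(frames, transcript, edited_transcript)
    stitched = stitch_parameters(segments)
    intermediate = [render_lower_face(f) for f in stitched]
    return neural_render(intermediate)

if __name__ == "__main__":
    # Minimal smoke test with a single annotated frame (dummy values).
    frame = FrameAnnotation("AH", "ah", (0.0,) * 6, [0.0], [0.0], [0.0], [0.0])
    print(edit_talking_head([frame], "hello", "hello world"))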

Details

Language(s): eng - English
Dates: 2019-06-04, 2019
Publication Status: Published online
Pages: 14 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1906.01524
URI: http://arxiv.org/abs/1906.01524
BibTex Citekey: Fried_arXiv1906.01524
Degree: -
