Learning Speech-driven 3D Conversational Gestures from Video

Habibie, I., Xu, W., Mehta, D., Liu, L., Seidel, H.-P., Pons-Moll, G., et al. (2021). Learning Speech-driven 3D Conversational Gestures from Video. Retrieved from https://arxiv.org/abs/2102.06837.

Basic

Genre: Paper
LaTeX: Learning Speech-driven {3D} Conversational Gestures from Video

Files

arXiv:2102.06837.pdf (Preprint), 12MB
Name: arXiv:2102.06837.pdf
Description: File downloaded from arXiv at 2021-11-04 13:38
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Habibie, Ikhsanul (1), Author
Xu, Weipeng (2), Author
Mehta, Dushyant (1), Author
Liu, Lingjie (1), Author
Seidel, Hans-Peter (1), Author
Pons-Moll, Gerard (3), Author
Elgharib, Mohamed (4), Author
Theobalt, Christian (4), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22
(3) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_persistent22
(4) Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
 Abstract: We propose the first approach to automatically and jointly synthesize both
the synchronous 3D conversational body and hand gestures, as well as 3D face
and head animations, of a virtual character from speech input. Our algorithm
uses a CNN architecture that leverages the inherent correlation between facial
expression and hand gestures. Synthesis of conversational body gestures is a
multi-modal problem since many similar gestures can plausibly accompany the
same input speech. To synthesize plausible body gestures in this setting, we
train a Generative Adversarial Network (GAN) based model that measures the
plausibility of the generated sequences of 3D body motion when paired with the
input audio features. We also contribute a new way to create a large corpus of
more than 33 hours of annotated body, hand, and face data from in-the-wild
videos of talking people. To this end, we apply state-of-the-art monocular
approaches for 3D body and hand pose estimation as well as dense 3D face
performance capture to the video corpus. In this way, we can train on orders of
magnitude more data than previous algorithms that resort to complex in-studio
motion capture solutions, and thereby train more expressive synthesis
algorithms. Our experiments and user study show the state-of-the-art quality of
our speech-synthesized full 3D character animations.
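
The GAN-based plausibility term described in the abstract scores a sequence of 3D body motion jointly with the audio features it is paired with. The sketch below illustrates that idea as a conditional discriminator in PyTorch; it is a minimal illustration only, and all class names, layer choices, and feature dimensions (audio_dim, pose_dim, hidden) are assumptions, not the authors' released implementation.

# Minimal sketch (not the authors' code): a conditional discriminator that
# scores how plausibly a 3D motion sequence matches the input audio features.
# All layer sizes, dimensions, and names are illustrative assumptions.
import torch
import torch.nn as nn

class AudioConditionedDiscriminator(nn.Module):
    def __init__(self, audio_dim=64, pose_dim=126, hidden=256):
        super().__init__()
        # Temporal 1D convolutions over each modality (assumed design).
        self.audio_net = nn.Sequential(
            nn.Conv1d(audio_dim, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
        )
        self.pose_net = nn.Sequential(
            nn.Conv1d(pose_dim, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
        )
        # Joint temporal network producing one plausibility logit per clip.
        self.joint = nn.Sequential(
            nn.Conv1d(2 * hidden, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(hidden, 1),
        )

    def forward(self, audio_feats, poses):
        # audio_feats: (batch, frames, audio_dim); poses: (batch, frames, pose_dim)
        a = self.audio_net(audio_feats.transpose(1, 2))
        p = self.pose_net(poses.transpose(1, 2))
        return self.joint(torch.cat([a, p], dim=1))  # (batch, 1) logits

# Usage: score an (audio, motion) pairing for a batch of clips.
disc = AudioConditionedDiscriminator()
audio = torch.randn(8, 100, 64)    # 8 clips, 100 frames, 64-dim audio features
motion = torch.randn(8, 100, 126)  # matching 3D body/hand pose sequences
score = disc(audio, motion)        # higher logit = more plausible pairing
print(score.shape)                 # torch.Size([8, 1])

Training would follow the usual GAN recipe: real (audio, motion) pairs from the annotated corpus are pushed toward high scores, generated pairs toward low ones, while the generator is updated to fool the discriminator.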

Details

Language(s): eng - English
 Dates: 2021-02-12, 2021
 Publication Status: Published online
 Pages: 15 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2102.06837
URI: https://arxiv.org/abs/2102.06837
BibTeX Citekey: Habibie_2102.06837
 Degree: -
