Item Details

  Learning Speech-driven 3D Conversational Gestures from Video

Habibie, I., Xu, W., Mehta, D., Liu, L., Seidel, H.-P., Pons-Moll, G., Elgharib, M., & Theobalt, C. (2021). Learning Speech-driven 3D Conversational Gestures from Video. Retrieved from https://arxiv.org/abs/2102.06837.


Basic Information

Item Permalink: https://hdl.handle.net/21.11116/0000-0009-70C7-8
Version Permalink: https://hdl.handle.net/21.11116/0000-0009-70C8-7
Resource Type: Report
LaTeX: Learning Speech-driven {3D} Conversational Gestures from Video

Files

arXiv:2102.06837.pdf (Preprint), 12MB
File Permalink: https://hdl.handle.net/21.11116/0000-0009-70C9-6
Name: arXiv:2102.06837.pdf
Description: File downloaded from arXiv at 2021-11-04 13:38
OA-Status: -
Visibility: Public
MIME Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Habibie, Ikhsanul 1, Author
Xu, Weipeng 2, Author
Mehta, Dushyant 1, Author
Liu, Lingjie 1, Author
Seidel, Hans-Peter 1, Author
Pons-Moll, Gerard 3, Author
Elgharib, Mohamed 4, Author
Theobalt, Christian 4, Author
Affiliations:
1 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
2 External Organizations, ou_persistent22
3 Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_persistent22
4 Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society, ou_3311330

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose the first approach to automatically and jointly synthesize both
the synchronous 3D conversational body and hand gestures, as well as 3D face
and head animations, of a virtual character from speech input. Our algorithm
uses a CNN architecture that leverages the inherent correlation between facial
expression and hand gestures. Synthesis of conversational body gestures is a
multi-modal problem since many similar gestures can plausibly accompany the
same input speech. To synthesize plausible body gestures in this setting, we
train a Generative Adversarial Network (GAN) based model that measures the
plausibility of the generated sequences of 3D body motion when paired with the
input audio features. We also contribute a new way to create a large corpus of
more than 33 hours of annotated body, hand, and face data from in-the-wild
videos of talking people. To this end, we apply state-of-the-art monocular
approaches for 3D body and hand pose estimation as well as dense 3D face
performance capture to the video corpus. In this way, we can train on orders of
magnitude more data than previous algorithms that resort to complex in-studio
motion capture solutions, and thereby train more expressive synthesis
algorithms. Our experiments and user study show the state-of-the-art quality of
our speech-synthesized full 3D character animations.
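
The GAN-based plausibility term described in the abstract can be pictured as a discriminator that receives a 3D motion sequence paired with the input audio features and outputs a realism score. The following PyTorch-style sketch is illustrative only; the module names, feature dimensions, 1D-convolutional layout, and temporal pooling are assumptions made for this example, not the authors' published architecture.

    # Illustrative sketch: a discriminator scoring how plausibly a 3D
    # body-motion sequence accompanies the given audio features, in the
    # spirit of the GAN objective described in the abstract. All
    # dimensions and layer choices are assumptions, not from the paper.
    import torch
    import torch.nn as nn

    class AudioMotionDiscriminator(nn.Module):
        def __init__(self, audio_dim=64, pose_dim=63, hidden=256):
            super().__init__()
            # Separate 1D-conv encoders over the time axis per modality.
            self.audio_enc = nn.Sequential(
                nn.Conv1d(audio_dim, hidden, kernel_size=5, padding=2),
                nn.LeakyReLU(0.2),
                nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
                nn.LeakyReLU(0.2),
            )
            self.motion_enc = nn.Sequential(
                nn.Conv1d(pose_dim, hidden, kernel_size=5, padding=2),
                nn.LeakyReLU(0.2),
                nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
                nn.LeakyReLU(0.2),
            )
            # Joint head: one unnormalized realism score per sequence.
            self.head = nn.Sequential(
                nn.Linear(2 * hidden, hidden),
                nn.LeakyReLU(0.2),
                nn.Linear(hidden, 1),
            )

        def forward(self, audio, motion):
            # audio: (batch, frames, audio_dim); motion: (batch, frames, pose_dim)
            a = self.audio_enc(audio.transpose(1, 2)).mean(dim=2)   # pool over time
            m = self.motion_enc(motion.transpose(1, 2)).mean(dim=2)
            return self.head(torch.cat([a, m], dim=1))

    # Usage: score an (audio, motion) pair; real pairs should score
    # higher than generated ones after adversarial training.
    d = AudioMotionDiscriminator()
    audio = torch.randn(2, 120, 64)        # 120 frames of audio features
    motion = torch.randn(2, 120, 63)       # e.g. 21 joints x 3 coordinates
    score = d(audio, motion)               # shape (2, 1)

Conditioning the discriminator on the audio, rather than on motion alone, is what lets such a model judge plausibility of the pairing in a multi-modal setting where many gestures can accompany the same speech.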

Details

Language: eng - English
Dates: 2021-02-12 (published online); 2021
Publication Status: Published online
Pages: 15 p.
Publishing Info: -
Table of Contents: -
Peer Review: -
Identifiers (DOI, ISBN, etc.): arXiv: 2102.06837
URI: https://arxiv.org/abs/2102.06837
BibTeX Citekey: Habibie_2102.06837
Degree: -
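
For convenience, the identifiers above assemble into a minimal BibTeX entry under the record's citekey. The @misc entry type and field layout follow common arXiv conventions and are an assumption; the citekey, LaTeX-escaped title, authors, year, eprint number, primary class, and URL are taken from the record itself.

    @misc{Habibie_2102.06837,
      title         = {Learning Speech-driven {3D} Conversational Gestures
                       from Video},
      author        = {Habibie, Ikhsanul and Xu, Weipeng and Mehta, Dushyant
                       and Liu, Lingjie and Seidel, Hans-Peter and Pons-Moll,
                       Gerard and Elgharib, Mohamed and Theobalt, Christian},
      year          = {2021},
      eprint        = {2102.06837},
      archivePrefix = {arXiv},
      primaryClass  = {cs.CV},
      url           = {https://arxiv.org/abs/2102.06837},
    }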
