



Meeting Abstract

Automatic Gesture Generation for Virtual Humans with Deep and Temporal Learning


Ferstl,  Y
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Ferstl, Y., & McDonnell, R. (2017). Automatic Gesture Generation for Virtual Humans with Deep and Temporal Learning. In 3rd International Workshop on Virtual Social Interaction (VSI 2017).

Cite as: http://hdl.handle.net/21.11116/0000-0000-C574-F
With increasingly sophisticated technical and visual design, virtual humans are now finding numerous applications: as instructors in virtual classrooms, in human-computer interfaces, in video games, and more. As their behaviour becomes increasingly automated, a key challenge remains the generation of believable gestures that are tightly linked to the uttered content. Without appropriate non-verbal behaviour, virtual humans quickly appear oddly rigid, eerie, and unappealing. With our proposed framework, we aim to develop more life-like virtual conversational agents by providing a fully automatic system for gesture generation from live speech. Our model does not rely on hand-annotation of data, and it bases the selection of gestural behaviour on prosodic, syntactic, and semantic analyses. Furthermore, it is not restricted to a set of predefined gestural signs. A major strength of the framework is its use of deep learning to associate speech with gestures while accounting for the temporal relations between successive gestures.
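To make the idea of prosody-driven, temporally aware gesture selection concrete, the following is a minimal conceptual sketch, not the authors' model: it maps a toy frame-level speech-energy track to beat-gesture triggers using temporal smoothing and peak detection. All function names, the window size, and the threshold are illustrative assumptions.

```python
# Conceptual sketch only (not the paper's deep-learning model):
# trigger gestures on smoothed prosodic-energy peaks, so that the
# decision at each frame depends on its temporal neighbours.

def smooth(values, window=3):
    """Moving-average smoothing: incorporates temporal context."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def gesture_triggers(energy, threshold=0.5):
    """Mark frames where smoothed energy peaks above a threshold
    (illustrative stand-in for a learned speech-to-gesture mapping)."""
    smoothed = smooth(energy)
    triggers = []
    for i, e in enumerate(smoothed):
        is_peak = (i == 0 or smoothed[i - 1] <= e) and \
                  (i == len(smoothed) - 1 or e > smoothed[i + 1])
        triggers.append(is_peak and e > threshold)
    return triggers

# Toy frame-level energy track, e.g. from live speech analysis.
energy = [0.1, 0.2, 0.9, 0.8, 0.2, 0.1, 0.7, 0.95, 0.3]
print(gesture_triggers(energy))
```

In the paper's framework this hand-written rule would be replaced by a deep network trained on speech-gesture data, but the sketch shows the core design choice the abstract describes: gesture decisions are made over a temporal window of speech features rather than frame by frame.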