Meeting Abstract

Automatic Gesture Generation for Virtual Humans with Deep and Temporal Learning

MPS-Authors

Ferstl, Y
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Ferstl, Y., & McDonnell, R. (2017). Automatic Gesture Generation for Virtual Humans with Deep and Temporal Learning. In 3rd International Workshop on Virtual Social Interaction (VSI 2017).


Cite as: https://hdl.handle.net/21.11116/0000-0000-C574-F
Abstract
With increasingly sophisticated technical and visual design, virtual humans are now finding numerous applications: as instructors in virtual classrooms, in human-computer interfaces, in video games, and more. As their behaviour becomes more and more automated, a key challenge remains the generation of believable gestures that are tightly linked to the uttered content. Without appropriate non-verbal behaviour, virtual humans quickly appear oddly rigid, eerie, and unappealing. With our proposed framework, we aim to develop more life-like virtual conversational agents by providing a fully automatic system for gesture generation from live speech. Our model does not rely on hand-annotation of data, and it bases the selection of gestural behaviour on prosodic, syntactic, and semantic analyses. Furthermore, it is not restricted to a set of predefined gestural signs. A major strength of the framework is its use of deep learning to associate speech with gestures while modelling the temporal relations between gestures.
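
The abstract does not describe a concrete architecture. Purely as an illustrative sketch, and not the authors' implementation: a temporal deep-learning model of this kind could be a recurrent network that maps per-frame speech features (prosodic, syntactic, and semantic, assumed here to be pre-extracted) to gesture pose parameters. All names, dimensions, and the choice of PyTorch below are hypothetical.

    # Hypothetical sketch only -- not the authors' model. Assumes per-frame
    # prosodic/syntactic/semantic speech features are already extracted.
    import torch
    import torch.nn as nn

    class SpeechToGestureSketch(nn.Module):
        def __init__(self, feat_dim=48, hidden_dim=256, pose_dim=69):
            super().__init__()
            # A unidirectional LSTM suits live speech: it sees only past
            # frames, and its state carries gesture history over time.
            self.rnn = nn.LSTM(feat_dim, hidden_dim, num_layers=2,
                               batch_first=True)
            # Per-frame regression from the recurrent state to pose parameters.
            self.head = nn.Linear(hidden_dim, pose_dim)

        def forward(self, speech_feats):
            # speech_feats: (batch, time, feat_dim)
            # returns: (batch, time, pose_dim) gesture pose parameters
            h, _ = self.rnn(speech_feats)
            return self.head(h)

    # Toy usage: 4 utterances, 100 frames each, 48-dim speech features.
    model = SpeechToGestureSketch()
    poses = model(torch.randn(4, 100, 48))
    print(poses.shape)  # torch.Size([4, 100, 69])

Training such a sketch would regress against recorded motion-capture poses; speech-to-gesture mapping is one-to-many, so real systems typically go beyond plain regression, but the recurrent state above is one simple way to encode the temporal relations between successive gestures that the abstract highlights.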