Abstract:
With increasingly sophisticated technical and visual design, virtual humans are now finding numerous applications: as instructors in virtual classrooms, in human-computer interfaces, in video games, and more. As their behaviour becomes more and more automated, a key challenge remains the generation of believable gestures that are tightly linked to the uttered content. Without appropriate non-verbal behaviour, virtual humans quickly appear oddly rigid, eerie, and unappealing. With our proposed framework, we aim to develop more life-like virtual conversational agents by providing a fully automatic system for gesture generation from live speech. Our model does not rely on hand-annotated data, and bases the selection of gestural behaviour on prosodic, syntactic, and semantic analyses. Furthermore, it is not restricted to a set of predefined gestural signs. A major strength of the framework is its use of deep learning to associate speech with gestures while also modelling temporal relations between gestures.