
Record


Released

Conference Paper

Hierarchical Spatio-Temporal Morphable Models for Representation of complex movements for Imitation Learning

MPG Authors

Bakir,  GH
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Franz,  MO
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

pdf2037.pdf
(any full text), 717KB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Ilg, W., Bakir, G., Franz, M., & Giese, M. (2003). Hierarchical Spatio-Temporal Morphable Models for Representation of complex movements for Imitation Learning. In U. Nunes, A. de Almeida, A. Bejczy, K. Kosuge, & J. Machado (Eds.), 11th International Conference on Advanced Robotics (ICAR 2003) (pp. 453-458).


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-DD3A-D
Abstract
Imitation learning is a promising technique for teaching robots complex movement sequences. One key problem in this area is the transfer of perceived movement characteristics from perception to action. The solution of this problem requires representations that are suitable for both the analysis and the synthesis of complex action sequences. We describe the method of Hierarchical Spatio-Temporal Morphable Models (HSTMMs), which allows automatic segmentation of movement sequences into movement primitives and models these primitives by morphing between a set of prototypical trajectories. We use HSTMMs in an imitation learning task for human writing movements. The models are learned from recorded trajectories and transferred to a human-like robot arm. Due to the generalization properties of our movement representation, the arm is capable of synthesizing new writing movements with only a few learning examples.
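
For illustration only, the sketch below shows the morphing idea mentioned in the abstract in its simplest form: a new trajectory is synthesized as a convex combination of prototypical trajectories that have been brought to a common time base. This is a minimal Python sketch, not the authors' implementation; the helper names (resample, morph), the example strokes, and the linear time normalization are assumptions, and the paper's full method additionally computes spatio-temporal correspondences between trajectories and a hierarchical segmentation into movement primitives.

# Illustrative sketch of trajectory morphing (not the authors' code):
# prototypes are resampled to a common time base and blended with
# convex weights to synthesize a new movement primitive.
import numpy as np


def resample(trajectory: np.ndarray, n_samples: int) -> np.ndarray:
    """Linearly resample a (T, D) trajectory to n_samples time steps."""
    t_old = np.linspace(0.0, 1.0, len(trajectory))
    t_new = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(
        [np.interp(t_new, t_old, trajectory[:, d]) for d in range(trajectory.shape[1])]
    )


def morph(prototypes: list[np.ndarray], weights, n_samples: int = 100) -> np.ndarray:
    """Synthesize a new trajectory as a weighted blend of prototype trajectories."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    aligned = [resample(p, n_samples) for p in prototypes]
    return sum(w * p for w, p in zip(weights, aligned))


# Example: blend two hypothetical 2-D writing strokes with weights 0.6 / 0.4.
stroke_a = np.column_stack([np.linspace(0, 1, 80), np.sin(np.linspace(0, np.pi, 80))])
stroke_b = np.column_stack([np.linspace(0, 1, 120), np.zeros(120)])
blended = morph([stroke_a, stroke_b], weights=[0.6, 0.4])
print(blended.shape)  # (100, 2)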