Conference Paper

From Motor Learning to Interaction Learning in Robots

MPS-Authors
Peters, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Sigaud, O., & Peters, J. (2009). From Motor Learning to Interaction Learning in Robots. In 7ème Journées Nationales de la Recherche en Robotique (JNRR 2009) (pp. 189-195).


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C216-5
Abstract
The number of advanced robot systems has been increasing in recent years, yielding a large variety of versatile designs with many degrees of freedom. These robots have the potential to be applied to uncertain tasks outside well-structured industrial settings. However, the complexity of both systems and tasks is often beyond the reach of classical robot programming methods. As a result, a more autonomous solution for robot task acquisition is needed, in which robots adaptively adjust their behaviour to the encountered situations and required tasks.

Learning approaches offer one of the most appealing ways to achieve this goal. However, while learning approaches are of high importance for robotics, we cannot simply use off-the-shelf methods from the machine learning community, as these usually do not transfer to the domains of robotics due to excessive computational cost and a lack of scalability. Instead, domain-appropriate approaches are needed. We focus here on several core domains of robot learning. For accurate task execution, we need motor learning capabilities. For fast learning of motor tasks, imitation learning offers the most promising approach. Self-improvement requires reinforcement learning approaches that scale to the domain of complex robots. Finally, for efficient interaction of humans with robot systems, we will need a form of interaction learning. This contribution provides a general introduction to these issues and briefly presents the contributions of the related book chapters to the corresponding research topics.