  Learning Operational Space Control

Peters, J., & Schaal, S. (2007). Learning Operational Space Control. In G. Sukhatme, S. Schaal, W. Burgard, & D. Fox (Eds.), Robotics: Science and Systems II (pp. 255-262). Cambridge, MA, USA: MIT Press.


External References

External reference:
http://www.roboticsproceedings.org/rss02/p33.pdf (publisher version)
Description: -
OA status: -
Creators

Creators:
Peters, J.¹, Author
Schaal, S., Author
Affiliations:
¹ External Organizations, ou_persistent22

Content

Keywords: -
Abstract: While operational space control is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting learning problem is ill-defined, as it requires learning an inverse mapping of a usually redundant system, which is well known to suffer from non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. A first important insight of this paper is that, nevertheless, a physically correct solution to the inverse problem does exist when learning of the inverse map is performed in a suitably piecewise-linear way. The second crucial component of our work is based on a recent insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the viewpoint of machine learning, the learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward and employs an expectation-maximization policy search algorithm. Evaluations on a three-degrees-of-freedom robot arm illustrate the feasibility of the suggested approach.
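The abstract's core idea, an EM-style policy search that maximizes an immediate reward, can be illustrated with a minimal reward-weighted-regression sketch. Note this is a toy stand-in, not the paper's operational space controller: the linear policy, the reward function, and every variable name below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of an EM-style policy search via reward-weighted regression.
# A linear policy u = theta^T x is refit each iteration with samples
# weighted by an exponentiated immediate reward.
rng = np.random.default_rng(0)
n_samples, n_state, n_action = 200, 3, 2
theta_true = rng.normal(size=(n_state, n_action))  # stands in for the unknown optimum
theta = np.zeros((n_state, n_action))              # current policy parameters

for _ in range(50):
    # Exploration: sample states and noisy motor commands from the policy.
    X = rng.normal(size=(n_samples, n_state))
    U = X @ theta + 0.3 * rng.normal(size=(n_samples, n_action))
    # Immediate reward: exponentially larger for commands near the optimum.
    err = np.sum((U - X @ theta_true) ** 2, axis=1)
    w = np.exp(-err)
    # M-step: reward-weighted least squares, so high-reward samples
    # dominate the refit of the policy parameters.
    XtW = X.T * w                      # each sample's column scaled by its weight
    theta = np.linalg.solve(XtW @ X, XtW @ U)

print(np.linalg.norm(theta - theta_true))  # deviation should shrink toward the noise floor
```

Because the reward weights are always non-negative, each M-step is an ordinary weighted regression, which is what makes this family of immediate-reward policy search methods attractive for learning control laws.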

Details

Language(s): -
Date: 2007-04
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: DOI: 10.15607/RSS.2006.II.033
BibTeX cite key: 5048
Degree type: -

Event

Title: Robotics: Science and Systems II (RSS 2006)
Venue: Philadelphia, PA, USA
Start/end date: 2006-08-16 - 2006-08-19

Source 1

Title: Robotics: Science and Systems II
Source genre: Conference proceedings
Creators:
Sukhatme, GS, Editor
Schaal, S, Editor
Burgard, W, Editor
Fox, D, Editor
Affiliations: -
Place, publisher, edition: Cambridge, MA, USA : MIT Press
Pages: -
Volume / Issue: -
Article number: -
Start / end page: 255 - 262
Identifier: ISBN: 978-0-262-69348-6