Conference Paper

Learning anticipation policies for robot table tennis

MPS-Authors

Wang, Z.
Dept. Phase Transformations; Thermodynamics and Kinetics, Max Planck Institute for Intelligent Systems, Max Planck Society;

Lampert, C. H.
Max Planck Society;


Mülling, K.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;


Schölkopf, B.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;


Peters, J.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

Citation

Wang, Z., Lampert, C. H., Mülling, K., Schölkopf, B., & Peters, J. (2011). Learning anticipation policies for robot table tennis. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011) (pp. 332-337).


Cite as: http://hdl.handle.net/11858/00-001M-0000-0010-760F-F
Abstract
Playing table tennis is a difficult task for robots, especially due to their limited acceleration. A key bottleneck is the time needed to reach the desired hitting position and racket velocity for returning the incoming ball. It often does not suffice to simply extrapolate the ball's trajectory after the opponent has returned it; more information is needed. Humans are able to predict the ball's trajectory based on the opponent's movements and thus have a considerable advantage. Hence, we propose to incorporate an anticipation system into robot table tennis players, which enables the robot to react earlier, while the opponent is still performing the striking movement. Based on visual observation of the opponent's racket movement, the robot can predict the opponent's aim and adjust its movement generation accordingly. The policies for deciding how and when to react are obtained by reinforcement learning. We conduct experiments with an existing robot player to show that the learned reaction policy can significantly improve the performance of the overall system.
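The core trade-off the abstract describes — reacting early on an uncertain prediction of the opponent's aim versus waiting for more visual evidence at the cost of movement time — can be illustrated with a toy reinforcement-learning sketch. The states, rewards, dynamics, and Monte-Carlo update below are illustrative assumptions for exposition, not the paper's actual formulation or experimental setup.

```python
import random

# Toy model (all numbers are assumptions): before ball contact the robot
# passes through 5 decision points. At each, it observes a confidence value
# for its prediction of the opponent's aim and chooses WAIT (gather more
# information) or REACT (commit early to the predicted hitting movement).
WAIT, REACT = 0, 1

def greedy(q, state):
    """Action with the highest learned value in this state (ties -> WAIT)."""
    return max((WAIT, REACT), key=lambda a: q.get((state, a), 0.0))

def simulate_episode(q, epsilon, rng):
    """Run one episode; return the visited (state, action) pairs and reward."""
    visited = []
    confidence = 0.2                      # prediction confidence grows over time
    for step in range(5):
        state = (step, round(confidence, 1))
        if rng.random() < epsilon:        # epsilon-greedy exploration
            action = rng.choice((WAIT, REACT))
        else:
            action = greedy(q, state)
        visited.append((state, action))
        if action == REACT:
            # Reacting succeeds with probability = current confidence; an
            # earlier (correct) reaction earns a small time bonus, since it
            # leaves more time to reach the hitting position.
            success = rng.random() < confidence
            return visited, (1.0 + 0.1 * (4 - step)) if success else -1.0
        confidence = min(1.0, confidence + 0.2)
    return visited, -0.5                  # never reacted: too late to return

def learn_policy(episodes=5000, alpha=0.1, epsilon=0.2, seed=0):
    """Monte-Carlo updates: move each visited pair's value toward the reward."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        visited, reward = simulate_episode(q, epsilon, rng)
        for state, action in visited:
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward - old)
    return q

if __name__ == "__main__":
    q = learn_policy()
    print("low confidence, early :", "REACT" if greedy(q, (0, 0.2)) else "WAIT")
    print("full confidence, late :", "REACT" if greedy(q, (4, 1.0)) else "WAIT")
```

With these assumed rewards, the learned policy waits while the prediction is unreliable and commits once confidence is high — the same "how and when to react" decision that the paper learns from real racket observations.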