

Conference Paper

Relative Entropy Policy Search

MPS-Authors
Peters, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

Mülling, K
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Altun, Y
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Peters, J., Mülling, K., & Altun, Y. (2010). Relative Entropy Policy Search. In Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10) (pp. 1607-1612). Menlo Park, CA, USA: AAAI Press.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-BF48-A
Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in a loss of information, and the approach has therefore been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
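
For orientation, a minimal sketch of the constrained problem the abstract alludes to, assuming the standard REPS formulation; the symbols below (the observed state-action distribution q, the stationary distribution \mu^{\pi}, the bound \epsilon, and the temperature \eta) are assumed notation rather than quoted from this record. REPS maximizes expected reward subject to a bound on the relative entropy between the new state-action distribution and the observed one:

\max_{\pi,\,\mu^{\pi}} \; \sum_{s,a} \mu^{\pi}(s)\,\pi(a \mid s)\,R(s,a)
\quad \text{s.t.} \quad D_{\mathrm{KL}}\!\left(\mu^{\pi}\pi \,\|\, q\right) \le \epsilon,
\qquad \sum_{s,a} \mu^{\pi}(s)\,\pi(a \mid s)\,P(s' \mid s,a) = \mu^{\pi}(s') \;\; \forall s'.

Solving the Lagrangian gives the exact update step mentioned in the abstract: the new policy exponentially reweights observed behavior, \pi(a \mid s) \propto q(s,a)\,\exp\!\left(\delta_{V}(s,a)/\eta\right), with the Bellman-error term \delta_{V}(s,a) = R(s,a) + \sum_{s'} P(s' \mid s,a)\,V(s') - V(s), where V and \eta arise as Lagrange multipliers of the stationarity and information-loss constraints.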