Item Details

Released

Conference Paper

Relative Entropy Policy Search

MPS-Authors
/persons/resource/persons84135

Peters, J.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84097

Mülling, K.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83782

Altun, Y.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Peters, J., Mülling, K., & Altun, Y. (2010). Relative Entropy Policy Search. In M. Fox & D. Poole (Eds.), Twenty-Fourth National Conference on Artificial Intelligence (AAAI-10) (pp. 1607-1612). Menlo Park, CA, USA: AAAI Press.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BF48-A
Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
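
The record contains no code, but the core idea named in the abstract, bounding the information loss (relative entropy) between successive policies, can be sketched compactly. The following is a minimal, hypothetical Python illustration of the episodic, weight-based form of such a KL-constrained update; the function name reps_weights, the bound epsilon, and the toy data are illustrative assumptions, not taken from the paper, and the full REPS method described there additionally enforces consistency of the state distribution through feature-based Bellman constraints.

```python
import numpy as np
from scipy.optimize import minimize


def reps_weights(returns, epsilon=0.1):
    """Sample weights for a KL-constrained (episodic) policy update.

    Solves the dual of   max_p E_p[R]   s.t.   KL(p || q) <= epsilon,
    with q the uniform distribution over the observed samples, and returns
    the resulting exponential weights p_i proportional to exp(R_i / eta).
    """
    returns = np.asarray(returns, dtype=float)
    r_max = returns.max()  # subtracted inside exp() for numerical stability

    def dual(log_eta):
        # Optimize over log(eta) so the temperature eta stays positive.
        eta = np.exp(log_eta)
        # g(eta) = eta*epsilon + eta*log( mean_i exp(R_i / eta) )
        return eta * epsilon + r_max + eta * np.log(
            np.mean(np.exp((returns - r_max) / eta))
        )

    res = minimize(lambda x: dual(x[0]), x0=np.array([0.0]), method="Nelder-Mead")
    eta = np.exp(res.x[0])
    weights = np.exp((returns - r_max) / eta)
    return weights / weights.sum(), eta


# Toy usage (assumed setup): reweight sampled policy parameters and refit the
# search distribution by weighted maximum likelihood.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=(500, 2))            # sampled parameters
R = -np.sum((theta - np.array([1.0, -0.5])) ** 2, 1)   # toy return, peak at (1, -0.5)
w, eta = reps_weights(R, epsilon=0.5)
mu_new = w @ theta                                      # weighted mean = new policy mean
```

The hard KL bound is what distinguishes this kind of update from an unconstrained greedy improvement: the temperature eta found by minimizing the dual controls how far the reweighted distribution may move from the old one, so information about previously good samples is not discarded in a single step.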