Conference Paper

Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark


Riedmiller, M., Peters, J., & Schaal, S. (2007). Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (pp. 254-261). Los Alamitos, CA, USA: IEEE Computer Society.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CE1B-8
In this paper, we evaluate variants of the three main classes of model-free policy gradient methods: finite-difference gradients, 'vanilla' policy gradients, and natural policy gradients. Each method is first presented in its basic form and subsequently refined and optimized. By carrying out numerous experiments on the cart-pole regulator benchmark, we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both the plant and the algorithms, so the results in this paper can be re-evaluated and reused, and new algorithms can be inserted with ease.