  Recurrent Policy Gradients

Wierstra, D., Förster, A., Peters, J., & Schmidhuber, J. (2010). Recurrent Policy Gradients. Logic Journal of the IGPL, 18(5), 620-634. doi:10.1093/jigpal/jzp049.

Basic
Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-BDD0-6
Version Permalink: http://hdl.handle.net/21.11116/0000-0002-6A91-3
Genre: Journal Article


Creators

Creators:
Wierstra, D., Author
Förster, A., Author
Peters, J.1, 2, Author
Schmidhuber, J., Author
Affiliations:
1 Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Reinforcement learning for partially observable Markov decision problems (POMDPs) is a challenge as it requires policies with an internal state. Traditional approaches suffer significantly from this shortcoming and usually make strong assumptions about the problem domain, such as perfect system models, state estimators and a Markovian hidden system. Recurrent neural networks (RNNs) offer a natural framework for policy learning with hidden state and require only a few limiting assumptions. As they can be trained well using gradient descent, they are suited for policy gradient approaches. In this paper, we present a policy gradient method, the Recurrent Policy Gradient, which constitutes a model-free reinforcement learning method. It is aimed at training limited-memory stochastic policies on problems which require long-term memories of past observations. The approach involves approximating a policy gradient for a recurrent neural network by backpropagating return-weighted characteristic eligibilities through time. Using a "Long Short-Term Memory" RNN architecture, we are able to outperform previous RL methods on three important benchmark tasks. Furthermore, we show that using history-dependent baselines helps reduce estimation variance significantly, thus enabling our approach to tackle more challenging, highly stochastic environments.
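The core idea described in the abstract (a REINFORCE-style policy gradient whose return-weighted characteristic eligibilities are backpropagated through the recurrent network's hidden state) can be illustrated with a short sketch. The code below is a minimal, illustrative reconstruction, not the authors' implementation: it assumes a PyTorch LSTM policy, a discrete action space and a Gymnasium-style environment interface, the names RecurrentPolicy and recurrent_policy_gradient_step are hypothetical, and it omits the history-dependent baseline the paper uses to reduce variance.

import torch
import torch.nn as nn


class RecurrentPolicy(nn.Module):
    """Illustrative LSTM policy: maps an observation sequence to action logits."""

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch=1, T, obs_dim); the LSTM state carries memory of past observations
        out, state = self.lstm(obs_seq, state)
        return self.head(out), state


def recurrent_policy_gradient_step(policy, optimizer, env, gamma=0.99):
    # Roll out one episode, storing log-probabilities (characteristic eligibilities)
    # while keeping the computation graph through the recurrent state.
    obs, _ = env.reset()
    state, log_probs, rewards, done = None, [], [], False
    while not done:
        obs_t = torch.as_tensor(obs, dtype=torch.float32).view(1, 1, -1)
        logits, state = policy(obs_t, state)
        dist = torch.distributions.Categorical(logits=logits[0, -1])
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(float(reward))
        done = terminated or truncated

    # Discounted returns-to-go; a history-dependent baseline would be subtracted here.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns)

    # Return-weighted log-probabilities; backward() propagates the eligibilities
    # through time via the stored LSTM states (backpropagation through time).
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)

A caller would construct, for example, policy = RecurrentPolicy(obs_dim, n_actions) and optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3), then invoke recurrent_policy_gradient_step once per episode.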

Details

Language(s): -
Dates: 2010-10
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1093/jigpal/jzp049
BibTex Citekey: 5879
Degree: -

Source 1

Title: Logic Journal of the IGPL
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: Oxford, UK : Oxford University Press
Pages: -
Volume / Issue: 18 (5)
Sequence Number: -
Start / End Page: 620 - 634
Identifier: ISSN: 1367-0751
CoNE: https://pure.mpg.de/cone/journals/resource/110976527305931