  Derivatives of Logarithmic Stationary Distributions for Policy Gradient Reinforcement Learning

Morimura, T., Uchibe, E., Yoshimoto, J., Peters, J., & Doya, K. (2010). Derivatives of Logarithmic Stationary Distributions for Policy Gradient Reinforcement Learning. Neural Computation, 22(2), 342-376. doi:10.1162/neco.2009.12-08-922.

Creators

Creators:
Morimura, T., Author
Uchibe, E., Author
Yoshimoto, J., Author
Peters, J. (1, 2), Author
Doya, K., Author
Affiliations:
1: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution, which corresponds to the sensitivity of that distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate γ for the value functions close to 1, these algorithms do not permit γ to be set exactly at γ = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution, through a backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting γ = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that they can improve the performance of existing PG methods.

Details

Language(s): -
Dates: 2010-02
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1162/neco.2009.12-08-922
BibTex Citekey: 5904
Degree: -

Source 1

Title: Neural computation
Source Genre: Journal
Publ. Info: Cambridge, Mass. : MIT Press
Pages: -
Volume / Issue: 22 (2)
Sequence Number: -
Start / End Page: 342 - 376
Identifier: ISSN: 0899-7667
CoNE: https://pure.mpg.de/cone/journals/resource/954925561591