Using Expectation-Maximization for Reinforcement Learning

Dayan, P., & Hinton, G. E. (1997). Using Expectation-Maximization for Reinforcement Learning. Neural Computation, 9(2), 271-278. doi:10.1162/neco.1997.9.2.271.

Creators

Dayan, P.¹, Author
Hinton, G. E., Author
Affiliations:
¹ External Organizations, ou_persistent22

Content

Free keywords: -
Abstract: We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
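
As a rough illustration of the idea: for binary stochastic actions a_i taken with probabilities p_i and a non-negative payoff r, the RPP replaces each probability with the payoff-weighted average activity, p_i <- E[r a_i] / E[r]. Below is a minimal Monte Carlo sketch of such an update in Python; the target pattern, payoff function, unit count, and batch size are illustrative assumptions rather than anything from the paper, and the paper's guarantee concerns the exact expectations, not these sampled estimates.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, batch = 5, 2000

    # Hypothetical non-negative payoff: largest when the sampled action
    # vector matches a fixed target pattern (illustrative, not from the paper).
    target = np.array([1, 0, 1, 1, 0])

    def payoff(actions):
        return np.exp(-np.sum(actions != target, axis=1))

    p = np.full(n_units, 0.5)  # Bernoulli firing probabilities of the units

    for _ in range(50):
        # Sample a batch of binary action vectors from the current policy.
        a = (rng.random((batch, n_units)) < p).astype(float)
        r = payoff(a)  # non-negative return for each sampled vector
        # RPP-style update: each probability becomes the payoff-weighted
        # average of that unit's activity, a Monte Carlo estimate of
        # E[r * a_i] / E[r].
        p = (r @ a) / r.sum()
        p = np.clip(p, 1e-6, 1 - 1e-6)  # keep probabilities interior

    print(np.round(p, 2))  # concentrates near the target pattern [1 0 1 1 0]

Because each new p is a convex combination of the sampled action vectors weighted by non-negative payoffs, a single update can move the parameters a long way; the paper's mapping to EM is what shows that, under its stated conditions, such jumps still increase the mean return.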

Details

Language(s): -
Dates: 1997-02
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.1162/neco.1997.9.2.271
Degree: -

Source 1

Title: Neural Computation
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: Cambridge, Mass. : MIT Press
Pages: -
Volume / Issue: 9 (2)
Sequence Number: -
Start / End Page: 271 - 278
Identifier: ISSN: 0899-7667
CoNE: https://pure.mpg.de/cone/journals/resource/954925561591