Path integral control and bounded rationality

Braun, D., Ortega, P., Theodorou, E., & Schaal, S. (2011). Path integral control and bounded rationality. In IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2011) (pp. 202-209). Piscataway, NJ, USA: IEEE.


Creators:
Braun, DA (1), Author
Ortega, PA (1), Author
Theodorou, E, Author
Schaal, S (1), Author
Affiliations:
(1) University of Southern California, Los Angeles, USA

Content

 Abstract: Path integral methods have recently been shown to be applicable to a very general class of optimal control problems. Here we examine the path integral formalism from a decision-theoretic point of view, since an optimal controller can always be regarded as an instance of a perfectly rational decision-maker that chooses its actions so as to maximize its expected utility. The problem with perfect rationality is, however, that finding optimal actions is often very difficult due to prohibitive computational resource costs that are not taken into account. In contrast, a bounded rational decision-maker has only limited resources and therefore needs to strike some compromise between the desired utility and the required resource costs. In particular, we suggest an information-theoretic measure of resource costs that can be derived axiomatically. As a consequence we obtain a variational principle for choice probabilities that trades off maximizing a given utility criterion and avoiding resource costs that arise due to deviating from initially given default choice probabilities. The resulting bounded rational policies are in general probabilistic. We show that the solutions found by the path integral formalism are such bounded rational policies. Furthermore, we show that the same formalism generalizes to discrete control problems, leading to linearly solvable bounded rational control policies in the case of Markov systems. Importantly, Bellman's optimality principle is not presupposed by this variational principle, but it can be derived as a limit case. This suggests that the information-theoretic formalization of bounded rationality might serve as a general principle in control design that unifies a number of recently reported approximate optimal control methods both in the continuous and discrete domain.
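The variational principle sketched in the abstract trades off expected utility against an information-theoretic resource cost (the deviation from a default policy), which yields probabilistic, softmax-like policies of the form p(x) ∝ p0(x) exp(β U(x)). As a rough illustration only (this code is not from the paper, and the function name and parameters are hypothetical), such a bounded-rational policy over a discrete action set can be computed as:

```python
import numpy as np

def bounded_rational_policy(utilities, prior, beta):
    """Choice probabilities p(x) proportional to p0(x) * exp(beta * U(x)).

    beta -> 0 leaves the default policy p0 unchanged (resource costs dominate);
    beta -> infinity recovers the perfectly rational argmax policy.
    """
    logits = np.log(np.asarray(prior, dtype=float)) + beta * np.asarray(utilities, dtype=float)
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Three actions with utilities 1, 2, 3 and a uniform default policy.
U = [1.0, 2.0, 3.0]
p0 = np.ones(3) / 3
print(bounded_rational_policy(U, p0, beta=0.0))   # close to the uniform default
print(bounded_rational_policy(U, p0, beta=50.0))  # nearly deterministic on the best action
```

The inverse-temperature parameter β plays the role of the resource bound: intermediate values interpolate between the default policy and the optimal deterministic controller.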

Details

Dates: 2011-07
Publication Status: Published in print
Identifiers: DOI: 10.1109/ADPRL.2011.5967366
BibTex Citekey: BraunOTS2011

Event

Title: IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2011)
Place of Event: Paris, France
Start-/End Date: 2011-04-11 - 2011-04-15


Source 1

Title: IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2011)
Source Genre: Proceedings
Publ. Info: Piscataway, NJ, USA : IEEE
Start / End Page: 202 - 209
Identifier: ISBN: 978-1-4244-9887-1