  Path integral control and bounded rationality

Braun, D., Ortega, P., Theodorou, E., & Schaal, S. (2011). Path integral control and bounded rationality. In IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2011) (pp. 202-209). Piscataway, NJ, USA: IEEE.

Creators

Creators:
Braun, DA (1), Author
Ortega, PA (1), Author
Theodorou, E, Author
Schaal, S (1), Author
Affiliations:
(1) University of Southern California, Los Angeles, USA

Content

Keywords: -
Abstract: Path integral methods have recently been shown to be applicable to a very general class of optimal control problems. Here we examine the path integral formalism from a decision-theoretic point of view, since an optimal controller can always be regarded as an instance of a perfectly rational decision-maker that chooses its actions so as to maximize its expected utility. The problem with perfect rationality is, however, that finding optimal actions is often very difficult due to prohibitive computational resource costs that are not taken into account. In contrast, a bounded rational decision-maker has only limited resources and therefore needs to strike some compromise between the desired utility and the required resource costs. In particular, we suggest an information-theoretic measure of resource costs that can be derived axiomatically. As a consequence we obtain a variational principle for choice probabilities that trades off maximizing a given utility criterion and avoiding resource costs that arise due to deviating from initially given default choice probabilities. The resulting bounded rational policies are in general probabilistic. We show that the solutions found by the path integral formalism are such bounded rational policies. Furthermore, we show that the same formalism generalizes to discrete control problems, leading to linearly solvable bounded rational control policies in the case of Markov systems. Importantly, Bellman's optimality principle is not presupposed by this variational principle, but it can be derived as a limit case. This suggests that the information-theoretic formalization of bounded rationality might serve as a general principle in control design that unifies a number of recently reported approximate optimal control methods both in the continuous and discrete domain.
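
In the discrete case, the trade-off described in the abstract (expected utility versus the information cost of deviating from a default policy) has a simple closed-form solution: the bounded rational policy reweights the default choice probabilities by an exponential of the utility. The sketch below is illustrative and not taken from the paper; the function name, the uniform default policy, and the example utilities are assumptions made for the demonstration.

import numpy as np

# Illustrative sketch (not the authors' code): the choice rule that maximizes
# E_p[U] - (1/beta) * KL(p || p0) over a finite set of actions, where p0 is the
# default policy and beta controls how much resource cost is tolerated.
def bounded_rational_policy(p0, U, beta):
    """Return choice probabilities p(x) proportional to p0(x) * exp(beta * U(x))."""
    w = p0 * np.exp(beta * (U - U.max()))  # shift by max(U) for numerical stability
    return w / w.sum()

p0 = np.full(3, 1.0 / 3.0)         # default (prior) policy over three actions
U = np.array([1.0, 2.0, 3.0])      # utilities of the three actions

print(bounded_rational_policy(p0, U, beta=0.1))   # stays close to the default policy
print(bounded_rational_policy(p0, U, beta=10.0))  # approaches argmax U (the perfectly rational limit)

As beta grows, the policy concentrates on the highest-utility action and recovers the perfectly rational limit; for small beta it stays near the default policy. This exponential reweighting over discrete actions mirrors the abstract's claim that the policies produced by the path integral formalism, which reweights probabilities over trajectories, are bounded rational policies of the same form.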

Details

Language(s): -
Date: 2011-07
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: DOI: 10.1109/ADPRL.2011.5967366
BibTeX citekey: BraunOTS2011
Degree: -

Event

Title: IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2011)
Venue: Paris, France
Start/End date: 2011-04-11 - 2011-04-15


Source 1

Title: IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL 2011)
Source genre: Conference proceedings
Place, publisher, edition: Piscataway, NJ, USA : IEEE
Pages: -
Volume / Issue: -
Article number: -
Start / End page: 202 - 209
Identifier: ISBN: 978-1-4244-9887-1