Conference Paper

Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

Citation

Guez, A., Silver, D., & Dayan, P. (2013). Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search. In P. Bartlett, F. Pereira, L. Bottou, C. J. C. Burges, & K. Weinberger (Eds.), Advances in Neural Information Processing Systems 25 (pp. 1025-1033). Red Hook, NY, USA: Curran Associates.


Cite as: https://hdl.handle.net/21.11116/0000-0004-C38C-2
Abstract
Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems, because it avoids expensive applications of Bayes' rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.
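
To make the idea concrete, below is a minimal sketch of the root-sampling scheme the abstract describes: Monte-Carlo tree search over a Bayes-adaptive problem where, instead of applying Bayes' rule at every node of the tree, each simulation draws one model from the current posterior and holds it fixed down the tree. All names here (DirichletPosterior, uct_search, the toy MDP sizes and reward) are illustrative assumptions for a small discrete MDP, not the authors' implementation.

```python
# Sketch: Bayes-adaptive MCTS with root sampling over a toy discrete MDP.
import math
import random
from collections import defaultdict

N_STATES, N_ACTIONS = 3, 2       # toy problem size (assumption)
GAMMA, UCT_C = 0.95, 1.4         # discount and UCB exploration constant

class DirichletPosterior:
    """Belief over unknown transition dynamics: one Dirichlet per (s, a)."""
    def __init__(self):
        self.counts = defaultdict(lambda: [1.0] * N_STATES)  # uniform prior

    def update(self, s, a, s_next):
        # Bayes' rule is applied only here, once per *real* transition,
        # never inside the search tree.
        self.counts[(s, a)][s_next] += 1.0

    def sample_mdp(self):
        """Draw one complete transition model from the current belief."""
        mdp = {}
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                g = [random.gammavariate(c, 1.0) for c in self.counts[(s, a)]]
                mdp[(s, a)] = [x / sum(g) for x in g]
        return mdp

def reward(s, a):
    return 1.0 if s == N_STATES - 1 else 0.0   # toy reward (assumption)

def step(mdp, s, a):
    return random.choices(range(N_STATES), weights=mdp[(s, a)])[0]

def uct_search(posterior, root_state, n_sims=2000, max_depth=25):
    N = defaultdict(int)     # visits per (state, depth) node
    Na = defaultdict(int)    # visits per (state, depth, action)
    Q = defaultdict(float)   # running mean return per (state, depth, action)

    def rollout(mdp, s, d):
        ret, disc = 0.0, 1.0
        for _ in range(d, max_depth):
            a = random.randrange(N_ACTIONS)
            ret += disc * reward(s, a)
            s = step(mdp, s, a)
            disc *= GAMMA
        return ret

    def simulate(mdp, s, d):
        if d == max_depth:
            return 0.0
        node = (s, d)
        if N[node] == 0:                 # first visit: expand, then rollout
            N[node] = 1
            return rollout(mdp, s, d)
        def ucb(a):                      # UCB1 action selection
            if Na[node + (a,)] == 0:
                return float("inf")
            return Q[node + (a,)] + UCT_C * math.sqrt(
                math.log(N[node]) / Na[node + (a,)])
        a = max(range(N_ACTIONS), key=ucb)
        ret = reward(s, a) + GAMMA * simulate(mdp, step(mdp, s, a), d + 1)
        N[node] += 1
        Na[node + (a,)] += 1
        Q[node + (a,)] += (ret - Q[node + (a,)]) / Na[node + (a,)]
        return ret

    for _ in range(n_sims):
        # Root sampling: one model per simulation, held fixed down the tree,
        # so no Bayes-rule updates are ever performed inside the tree.
        simulate(posterior.sample_mdp(), root_state, 0)
    return max(range(N_ACTIONS), key=lambda a: Q[(root_state, 0, a)])

if __name__ == "__main__":
    belief = DirichletPosterior()
    print("recommended action at state 0:", uct_search(belief, root_state=0))
```

One simplification to note: for brevity this sketch draws the entire toy transition model at the root of each simulation, whereas the paper's method samples model components lazily, only as the simulation actually needs them. The property the sketch is meant to convey is the same either way: belief updates happen only on real experience, outside the search tree.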