
Released

Report

PAC-Bayesian Analysis of Martingales and Multiarmed Bandits

MPS-Authors

Seldin,  Y
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Peters,  J.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource

http://arxiv.org/abs/1105.2416
(Publisher version)

Fulltext (public)

TR-2011-Seldin.pdf
(Any fulltext), 235KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Seldin, Y., Laviolette, F., Shawe-Taylor, J., Peters, J., & Auer, P. (2011). PAC-Bayesian Analysis of Martingales and Multiarmed Bandits. Tübingen, Germany: Max Planck Institute for Biological Cybernetics.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BBE6-6
Abstract
We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that makes it possible to bound expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach is based on integrating the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and many other fields where martingales and limited feedback are encountered.
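
For reference, the two classical tools named in the abstract can be stated in their standard textbook forms as follows; this is a minimal sketch of the commonly cited versions of these inequalities, not the specific martingale or bandit bounds derived in the report.

% Standard Hoeffding-Azuma inequality (classical form, assumed here for illustration):
% for a martingale difference sequence X_1, ..., X_n with |X_i| <= c_i almost surely,
\[
\Pr\!\left( \left| \sum_{i=1}^{n} X_i \right| \ge \varepsilon \right)
\;\le\; 2 \exp\!\left( - \frac{\varepsilon^2}{2 \sum_{i=1}^{n} c_i^2} \right).
\]
% Standard PAC-Bayes-kl bound (Seeger/Maurer form, stated only for orientation):
% with probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all posterior distributions \rho over hypotheses,
\[
\mathrm{kl}\!\left( \hat{L}(\rho) \,\middle\|\, L(\rho) \right)
\;\le\; \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta}}{n},
\]
% where \hat{L}(\rho) and L(\rho) are the empirical and expected losses under \rho,
% \pi is a fixed prior, and kl is the Kullback-Leibler divergence between Bernoulli means.

The report's contribution, as summarized above, is to extend PAC-Bayesian statements of this kind from i.i.d. samples to martingale sequences and to the limited-feedback setting of multiarmed bandits.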