Conference Paper

PAC-Bayesian Analysis of Contextual Bandits


Seldin, Y.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society


Seldin, Y., Auer, P., Laviolette, F., Shawe-Taylor, J., & Ortner, R. (2012). PAC-Bayesian Analysis of Contextual Bandits. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in Neural Information Processing Systems 24 (pp. 1683-1691). Red Hook, NY, USA: Curran.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-B884-D
We derive an instantaneous (per-round) data-dependent regret bound for stochastic multiarmed bandits with side information (also known as contextual bandits). The scaling of our regret bound with the number of states (contexts) N goes as \sqrt{N I_{ρ_t}(S;A)}, where I_{ρ_t}(S;A) is the mutual information between states and actions (the side information) used by the algorithm at round t. If the algorithm uses all the side information, the regret bound scales as \sqrt{N \ln K}, where K is the number of actions (arms). However, if the side information I_{ρ_t}(S;A) is not fully used, the regret bound is significantly tighter. In the extreme case, when I_{ρ_t}(S;A) = 0, the dependence on the number of states reduces from linear to logarithmic. Our analysis allows us to provide the algorithm with a large amount of side information, let the algorithm decide which side information is relevant for the task, and penalize the algorithm only for the side information that it actually uses. We also present an algorithm for multiarmed bandits with side information whose computational complexity is linear in the number of actions.
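The quantity driving the bound, I_{ρ_t}(S;A), is the mutual information between the state and the action chosen by the algorithm's policy. A minimal sketch of how this quantity is computed from a state distribution ρ(s) and a policy π(a|s) — function and variable names here are illustrative, not taken from the paper:

```python
import math

def mutual_information(state_probs, policy):
    """I(S; A) in nats, where state_probs[s] = rho(s) and policy[s][a] = pi(a|s)."""
    n_actions = len(policy[0])
    # Marginal action distribution: pi_bar(a) = sum_s rho(s) * pi(a|s)
    marginal = [sum(state_probs[s] * policy[s][a] for s in range(len(policy)))
                for a in range(n_actions)]
    mi = 0.0
    for s, rho_s in enumerate(state_probs):
        for a, p in enumerate(policy[s]):
            if rho_s * p > 0:
                mi += rho_s * p * math.log(p / marginal[a])
    return mi

# A context-independent policy uses no side information: I(S;A) = 0,
# which is the regime where the bound's state dependence becomes logarithmic.
uniform_policy = [[0.5, 0.5], [0.5, 0.5]]
print(mutual_information([0.5, 0.5], uniform_policy))  # 0.0

# A fully context-dependent policy over 2 states / 2 actions: I(S;A) = ln 2.
deterministic_policy = [[1.0, 0.0], [0.0, 1.0]]
print(mutual_information([0.5, 0.5], deterministic_policy))  # ≈ 0.693
```

The two extremes illustrate the abstract's claim: the more the policy's actions depend on the state, the larger I(S;A), and the larger the contribution of the number of states N to the regret bound.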