Paper Title
Lifting the Information Ratio: An Information-Theoretic Analysis of Thompson Sampling for Contextual Bandits
Paper Authors
Paper Abstract
We study the Bayesian regret of the renowned Thompson Sampling algorithm in contextual bandits with binary losses and adversarially-selected contexts. We adapt the information-theoretic perspective of \cite{RvR16} to the contextual setting by considering a lifted version of the information ratio defined in terms of the unknown model parameter instead of the optimal action or optimal policy as done in previous works on the same setting. This allows us to bound the regret in terms of the entropy of the prior distribution through a remarkably simple proof, and with no structural assumptions on the likelihood or the prior. The extension to priors with infinite entropy only requires a Lipschitz assumption on the log-likelihood. An interesting special case is that of logistic bandits with $d$-dimensional parameters, $K$ actions, and Lipschitz logits, for which we provide a $\widetilde{O}(\sqrt{dKT})$ regret upper-bound that does not depend on the smallest slope of the sigmoid link function.
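To make the setting concrete, below is a minimal sketch of Thompson Sampling in a contextual bandit with binary losses, using a logistic likelihood as in the paper's special case. The finite prior support, the randomly drawn contexts standing in for adversarially-selected ones, and all names (`thetas`, `log_post`, etc.) are illustrative assumptions, not the paper's implementation or analysis.

```python
# Illustrative sketch (not from the paper): Thompson Sampling with a
# logistic likelihood, binary losses, and a finite prior over parameters.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, K, T, M = 5, 4, 1000, 200            # parameter dimension, actions, horizon, prior support size
thetas = rng.normal(size=(M, d))        # candidate parameters theta_1, ..., theta_M
log_post = np.full(M, -np.log(M))       # log-posterior, initialized to the uniform prior
theta_star = thetas[rng.integers(M)]    # "true" parameter drawn from the prior

for t in range(T):
    X = rng.normal(size=(K, d))                      # context: one feature vector per action (random stand-in)
    i = rng.choice(M, p=np.exp(log_post))            # Thompson step: sample a model from the posterior
    a = int(np.argmin(sigmoid(X @ thetas[i])))       # play the action with smallest expected loss under the sample
    y = rng.binomial(1, sigmoid(X[a] @ theta_star))  # binary loss drawn from the true logistic model
    # Bayes update: weight each candidate by the likelihood of the observed loss
    probs = sigmoid(X[a] @ thetas.T)                 # P(loss = 1 | theta_m) for each candidate m
    log_post += np.log(probs if y == 1 else 1.0 - probs)
    log_post -= log_post.max()
    log_post -= np.log(np.exp(log_post).sum())       # renormalize in log space
```

The exact posterior update is tractable here only because the prior has finite support; the paper's analysis requires no such structural assumption on the prior or the likelihood.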