Paper Title
Learning to Rank in the Position Based Model with Bandit Feedback
Paper Authors
Paper Abstract
Personalization is a crucial aspect of many online experiences. In particular, content ranking is often a key component in delivering sophisticated personalization results. Commonly, supervised learning-to-rank methods are applied, which suffer from bias introduced during data collection by the production systems in charge of producing the ranking. To compensate for this problem, we leverage contextual multi-armed bandits. We propose novel extensions of two well-known algorithms, viz. LinUCB and Linear Thompson Sampling, to the ranking use-case. To account for the biases in a production environment, we employ the position-based click model. Finally, we show the validity of the proposed algorithms by conducting extensive offline experiments on synthetic datasets as well as customer-facing online A/B experiments.
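To make the abstract's ideas concrete, the sketch below shows one plausible way LinUCB could be extended to ranking under a position-based click model, where the probability of a click at position k factors as P(examined at k) · P(relevant | features). This is an illustrative assumption-laden sketch, not the paper's actual algorithm: the class name, the inverse-propensity click weighting, and the assumption that examination probabilities are known are all choices made here for illustration.

```python
import numpy as np


class RankingLinUCB:
    """Illustrative sketch: LinUCB adapted to ranking under a
    position-based click model, P(click at pos k) = exam_probs[k] * P(relevant | x).
    The update scheme (inverse-propensity weighting of clicks) is an
    assumption for this sketch, not the paper's exact method."""

    def __init__(self, dim, exam_probs, alpha=1.0, reg=1.0):
        self.A = reg * np.eye(dim)    # regularized Gram matrix
        self.b = np.zeros(dim)        # accumulated (debiased) responses
        self.alpha = alpha            # exploration width
        self.exam_probs = exam_probs  # assumed-known examination probabilities

    def rank(self, X):
        """Order item feature rows X by optimistic (UCB) relevance score."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        # x^T theta + alpha * sqrt(x^T A^-1 x) for every item at once
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", X, A_inv, X))
        scores = X @ theta + self.alpha * bonus
        return np.argsort(-scores)

    def update(self, X, ranking, clicks):
        """Debias each observed click by dividing by the examination
        probability of the position it was shown at."""
        for pos, item in enumerate(ranking[: len(self.exam_probs)]):
            x = X[item]
            w = 1.0 / self.exam_probs[pos]
            self.A += np.outer(x, x)
            self.b += (clicks[pos] * w) * x
```

A typical round would call `rank` to produce the displayed ordering, log which of the top positions received clicks, and feed those clicks back through `update`; the propensity weights keep frequently-examined top positions from dominating the learned relevance model.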