Paper Title
Counterfactual Learning with General Data-generating Policies
Paper Authors
Paper Abstract
Off-policy evaluation (OPE) attempts to predict the performance of counterfactual policies using log data from a different policy. We extend its applicability by developing an OPE method for a class of both full-support and deficient-support logging policies in contextual-bandit settings. This class includes deterministic bandit algorithms (such as Upper Confidence Bound) as well as deterministic decision-making based on supervised and unsupervised learning. We prove that our method's prediction converges in probability to the true performance of a counterfactual policy as the sample size increases. We validate our method with experiments on partly and entirely deterministic logging policies. Finally, we apply it to evaluate coupon targeting policies used by a major online platform and show how to improve the existing policy.
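For context, the sketch below shows a standard inverse-propensity-weighted (IPW) off-policy value estimate in a contextual-bandit setting. It is not the paper's proposed method; the data, the toy reward model, and all names (`ipw_value`, `logging_probs`, `target_probs`) are illustrative assumptions. It also makes explicit the full-support requirement that the paper's deficient-support setting relaxes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative logged bandit data: contexts, actions chosen by the logging
# policy, observed rewards, and the logging policy's action probabilities.
n, n_actions = 10_000, 4
contexts = rng.normal(size=(n, 5))
logging_probs = rng.dirichlet(np.ones(n_actions), size=n)      # pi_0(a | x), full support
actions = np.array([rng.choice(n_actions, p=p) for p in logging_probs])
rewards = rng.binomial(1, 0.3 + 0.1 * (actions == 2))          # toy reward model

def ipw_value(target_probs, actions, rewards, logging_probs):
    """Inverse-propensity-weighted estimate of a counterfactual policy's value.

    target_probs[i, a] = pi(a | x_i) for the policy being evaluated.
    This standard estimator requires the logging policy to put positive
    probability on every action the target policy can take (full support);
    the deficient-support logging policies studied in the paper violate
    this assumption, which is what motivates its alternative method.
    """
    idx = np.arange(len(actions))
    pscore = logging_probs[idx, actions]                        # pi_0(a_i | x_i)
    weight = target_probs[idx, actions] / pscore                # importance weight
    return np.mean(weight * rewards)

# A counterfactual policy that always plays action 2 (deterministic target).
target_probs = np.zeros((n, n_actions))
target_probs[:, 2] = 1.0

print("estimated value of the counterfactual policy:",
      ipw_value(target_probs, actions, rewards, logging_probs))
```

In this generic setup, the estimate converges to the target policy's true value as the sample size grows only because the logging policy randomizes over all actions; when the logging policy is partly or entirely deterministic, the propensity score is zero for some context-action pairs and the weights above are undefined, which is the gap the paper addresses.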