Paper Title

Future Gradient Descent for Adapting the Temporal Shifting Data Distribution in Online Recommendation Systems

Paper Authors

Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu

Abstract

One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes a mismatch between the training and testing data distributions and hence domain generalization error. To overcome this, we propose to learn a meta future gradient generator that forecasts the gradient information of the future data distribution for training, so that the recommendation model can be trained as if we were able to look ahead at the future of its deployment. Compared with Batch Update, a widely used paradigm, our theory suggests that the proposed algorithm achieves a smaller temporal domain generalization error, measured by a gradient variation term in a local regret. We demonstrate the empirical advantage by comparing with various representative baselines.
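The core idea in the abstract can be illustrated with a toy sketch. This is not the authors' method (the paper learns a meta gradient generator); here a hypothetical `forecast_gradient` stand-in simply extrapolates from recent gradients, and the model is updated with the forecast instead of the stale gradient from the latest batch, under an assumed drifting data stream:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # Gradient of squared error for a linear model w @ x.
    return 2 * (w @ x - y) * x

def forecast_gradient(grad_history):
    # Hypothetical stand-in for the learned generator:
    # linear extrapolation from the last two observed gradients.
    if len(grad_history) < 2:
        return grad_history[-1]
    return 2 * grad_history[-1] - grad_history[-2]

w = np.zeros(3)          # recommendation-model parameters (toy)
grad_history = []
lr = 0.05

# Simulated temporally shifting stream: the true weights drift over time.
for t in range(100):
    w_true = np.array([1.0, -1.0, 0.5]) + 0.01 * t
    x = rng.normal(size=3)
    y = float(w_true @ x)
    grad_history.append(loss_grad(w, x, y))
    # Update with the forecasted "future" gradient rather than the
    # observed (already stale) one.
    w -= lr * forecast_gradient(grad_history)

print(np.round(w, 2))
```

The contrast with the Batch Update baseline mentioned in the abstract is that Batch Update would apply `grad_history[-1]` directly, whereas the forecasted gradient anticipates where the shifting distribution is heading.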
