Paper Title

FairRank: Fairness-aware Single-tower Ranking Framework for News Recommendation

Paper Authors

Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Paper Abstract

Single-tower models are widely used in the ranking stage of news recommendation to accurately rank candidate news according to their fine-grained relatedness with user interest indicated by user behaviors. However, these models can easily inherit the biases related to users' sensitive attributes (e.g., demographics) encoded in the training click data, and may generate recommendation results that are unfair to users with certain attributes. In this paper, we propose FairRank, a fairness-aware single-tower ranking framework for news recommendation. Since candidate news selection can be biased, we propose to use a shared candidate-aware user model to match user interest with a real displayed candidate news article and a randomly sampled news article, respectively, learning a candidate-aware user embedding that reflects user interest in the candidate news and a candidate-invariant user embedding that indicates intrinsic user interest. We apply adversarial learning to both of them to reduce the biases brought by sensitive user attributes. In addition, we use a KL loss to regularize the attribute labels inferred from the two user embeddings to be similar, which makes the model capture less candidate-aware bias information. Extensive experiments on two datasets show that FairRank can improve the fairness of various single-tower news ranking models with only minor performance loss.
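
The abstract describes three training components: a shared candidate-aware user model applied to both the displayed candidate and a random candidate, adversarial learning on the two resulting user embeddings, and a KL regularizer between the attribute distributions inferred from them. The PyTorch sketch below illustrates one way these objectives could be wired together; the module names (`user_model`, `attr_head`), the use of a gradient-reversal layer for the adversary, and all shapes and weights are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of FairRank-style fairness objectives (assumed implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, a common choice for adversarial attribute removal."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient back with reversed sign, scaled by lamb.
        return -ctx.lamb * grad_output, None


def fairness_losses(user_model, attr_head, clicked_news,
                    real_cand, random_cand, attr_labels, lamb=1.0):
    """Compute the adversarial and KL fairness losses described in the abstract.

    user_model and attr_head are hypothetical modules: the shared candidate-aware
    user encoder and a sensitive-attribute classifier, respectively.
    """
    # Shared candidate-aware user model produces two embeddings: one conditioned
    # on the real displayed candidate, one on a randomly sampled candidate.
    u_cand = user_model(clicked_news, real_cand)    # candidate-aware embedding
    u_inv = user_model(clicked_news, random_cand)   # candidate-invariant embedding

    # Adversarial loss: the attribute head predicts the sensitive attribute from
    # both embeddings; gradient reversal pushes the user model to remove it.
    logits_cand = attr_head(GradReverse.apply(u_cand, lamb))
    logits_inv = attr_head(GradReverse.apply(u_inv, lamb))
    adv_loss = (F.cross_entropy(logits_cand, attr_labels)
                + F.cross_entropy(logits_inv, attr_labels))

    # KL regularization: push the attribute distributions inferred from the two
    # embeddings to be similar, so the candidate-aware embedding carries little
    # extra candidate-induced bias information.
    kl_loss = F.kl_div(F.log_softmax(logits_cand, dim=-1),
                       F.softmax(logits_inv, dim=-1),
                       reduction="batchmean")
    return adv_loss, kl_loss
```

In training, these two terms would be added (with tuned weights) to the usual click-prediction ranking loss of the underlying single-tower model.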
