Paper Title
Knowledge Graph Contrastive Learning for Recommendation
Paper Authors
Paper Abstract
Knowledge Graphs (KGs) have been utilized as useful side information to improve recommendation quality. In such recommender systems, knowledge graph information often contains fruitful facts and inherent semantic relatedness among items. However, the success of these methods relies on high-quality knowledge graphs, and they may fail to learn quality representations due to two challenges: i) the long-tail distribution of entities results in sparse supervision signals for KG-enhanced item representations; ii) real-world knowledge graphs are often noisy and contain topic-irrelevant connections between items and entities. Such KG sparsity and noise cause item-entity relations to deviate from reflecting items' true characteristics, which significantly amplifies the noise effect and hinders the accurate representation of users' preferences. To fill this research gap, we design a general Knowledge Graph Contrastive Learning framework (KGCL) that alleviates information noise for knowledge graph-enhanced recommender systems. Specifically, we propose a knowledge graph augmentation schema to suppress KG noise during information aggregation and to derive more robust knowledge-aware representations for items. In addition, we exploit additional supervision signals from the KG augmentation process to guide a cross-view contrastive learning paradigm, giving unbiased user-item interactions a greater role in gradient descent and further suppressing the noise. Extensive experiments on three public datasets demonstrate the consistent superiority of our KGCL over state-of-the-art techniques. KGCL also achieves strong performance in recommendation scenarios with sparse user-item interactions and long-tail, noisy KG entities. Our implementation code is available at https://github.com/yuh-yang/KGCL-SIGIR22
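The cross-view contrastive objective mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that); it only shows the generic InfoNCE-style loss commonly used for such paradigms, where an item's embeddings under two stochastically augmented KG views form a positive pair and other items in the batch serve as negatives. The function name `info_nce`, the temperature value, and the toy data are all illustrative assumptions.

```python
import numpy as np

def info_nce(view1, view2, tau=0.2):
    """Illustrative cross-view InfoNCE loss (not the paper's exact code).

    Rows of view1/view2 are embeddings of the same items under two
    augmented KG views; the i-th pair is positive, all others negative.
    """
    # L2-normalize so dot products are cosine similarities
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / tau  # (N, N) similarity matrix, scaled by temperature
    # Softmax cross-entropy with the diagonal entries as the positive pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy example: two lightly perturbed views of the same 4 item embeddings
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
loss = info_nce(base + 0.01 * rng.normal(size=(4, 8)),
                base + 0.01 * rng.normal(size=(4, 8)))
```

Because the two views of each item stay nearly aligned while different items are nearly orthogonal, the loss here is small; pulling the views of an item together while pushing other items apart is what lets the augmentation-derived signal regularize the noisy KG representations.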