Paper Title

CATA++: A Collaborative Dual Attentive Autoencoder Method for Recommending Scientific Articles

Paper Authors

Meshal Alfarhood and Jianlin Cheng

Paper Abstract

Recommender systems have become an essential component of commercial websites. Collaborative filtering approaches, and Matrix Factorization (MF) techniques in particular, are widely used in recommender systems. However, the inherent data sparsity problem limits their performance, since users typically interact with very few items in the system. Consequently, several hybrid models have recently been proposed to improve MF performance by incorporating additional contextual information into the learning process. Although these models improve recommendation quality, two primary aspects leave room for further improvement: (1) many models exploit only part of the available contextual information and neglect the rest; (2) the learning of the feature space of the side contextual information needs to be further enhanced. In this paper, we introduce a Collaborative Dual Attentive Autoencoder (CATA++) for recommending scientific articles. CATA++ utilizes an article's content and learns its latent space via two parallel autoencoders. We employ an attention mechanism to capture the most relevant parts of the information in order to make more relevant recommendations. Extensive experiments on three real-world datasets show that our dual-way learning strategy significantly improves MF performance in comparison with other state-of-the-art MF-based models under various experimental evaluations. The source code of our method is available at: https://github.com/jianlin-cheng/CATA.
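The abstract describes the architecture only at a high level: two parallel autoencoders encode different parts of an article's content, an attention mechanism re-weights the learned latent features, and the resulting representations regularize the item factors of a matrix-factorization model. The sketch below (PyTorch) is a minimal illustration of that idea under stated assumptions, not the authors' implementation (which is available at the GitHub link above); the layer sizes, the split of the content into two inputs, and the way the content codes are coupled to the item factors are illustrative choices only.

```python
# Minimal sketch of a "dual attentive autoencoder + MF" hybrid, assuming:
# - each article's content is pre-split into two feature vectors (e.g. title/abstract
#   vs. other side information), which is an assumption, not the paper's exact split;
# - the item factors are tied to the averaged content codes via a squared-error
#   regularizer, in the spirit of CDL/CVAE-style hybrids.
import torch
import torch.nn as nn


class AttentiveAutoencoder(nn.Module):
    """Autoencoder whose bottleneck is re-weighted by a soft attention vector."""

    def __init__(self, input_dim: int, latent_dim: int = 50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 200), nn.ReLU(),
            nn.Linear(200, latent_dim), nn.ReLU(),
        )
        # One attention weight per latent feature (softmax-normalized).
        self.attention = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.Softmax(dim=-1),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 200), nn.ReLU(),
            nn.Linear(200, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        z_att = z * self.attention(z)  # emphasize the most relevant latent features
        return z_att, self.decoder(z_att)


class DualAttentiveCF(nn.Module):
    """MF whose item factors are regularized by two parallel attentive autoencoders."""

    def __init__(self, n_users, n_items, content_dim_a, content_dim_b, latent_dim=50):
        super().__init__()
        self.user_factors = nn.Embedding(n_users, latent_dim)
        self.item_factors = nn.Embedding(n_items, latent_dim)
        self.ae_a = AttentiveAutoencoder(content_dim_a, latent_dim)  # e.g. title/abstract features
        self.ae_b = AttentiveAutoencoder(content_dim_b, latent_dim)  # e.g. remaining side information

    def forward(self, users, items, content_a, content_b):
        z_a, rec_a = self.ae_a(content_a)
        z_b, rec_b = self.ae_b(content_b)
        item_vec = self.item_factors(items)
        # MF prediction plus a term pulling item factors toward the content codes.
        pred = (self.user_factors(users) * item_vec).sum(dim=-1)
        content_reg = ((item_vec - 0.5 * (z_a + z_b)) ** 2).sum(dim=-1)
        return pred, rec_a, rec_b, content_reg
```

In this kind of hybrid, the content regularizer lets items with few or no ratings still be placed meaningfully in the latent space through their content codes; averaging the two branch codes is just one simple way to combine the dual representations, and the actual combination used by CATA++ may differ.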
