Paper Title

SimCPSR: Simple Contrastive Learning for Paper Submission Recommendation System

Paper Authors

Le, Duc H., Doan, Tram T., Huynh, Son T., Nguyen, Binh T.

Paper Abstract

Recommendation systems play a vital role in many areas, especially in academia, where they support researchers in selecting a conference or journal and increasing the acceptance of their work. This study proposes a transformer-based model that uses transfer learning as an efficient approach for paper submission recommendation. By combining essential information (such as the title, the abstract, and the list of keywords) with the aims and scopes of journals, the model can recommend the Top K journals that maximize the paper's chance of acceptance. Our model was developed in two stages: (i) fine-tuning the pre-trained language model (LM) with a simple contrastive learning framework, where a simple supervised contrastive objective is used to fine-tune all parameters and encourage the LM to learn effective document representations; and (ii) training the fine-tuned LM on different combinations of the features for the downstream task. Compared to previous approaches, this study offers a more effective method for paper submission recommendation: we achieve Top 1, 3, 5, and 10 accuracies of 0.5173, 0.8097, 0.8862, and 0.9496, respectively, on the test set when combining the title, abstract, and keywords as input features. When the journals' aims and scopes are also incorporated, the model reaches 0.5194, 0.8112, 0.8866, and 0.9496 for Top 1, 3, 5, and 10, respectively.
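
The two-stage recipe described in the abstract lends itself to a short sketch. The code below is a minimal, hypothetical illustration (not the authors' implementation), assuming PyTorch and a Hugging Face encoder such as "bert-base-uncased": stage (i) fine-tunes the LM with a supervised contrastive objective in which papers from the same journal form positive pairs, and stage (ii) places a classification head on the resulting document embeddings to rank the Top-K journals. All function and variable names (embed, supervised_contrastive_loss, recommend_topk, num_journals) are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Mean-pooled document embeddings over non-padding tokens.
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state              # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)              # (B, H)

def supervised_contrastive_loss(z, labels, temperature=0.05):
    # Stage (i) objective: papers published in the same journal are treated as positives.
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                             # pairwise cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))                 # exclude self-similarity
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye # positive-pair mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.sum(1) > 0].mean()                  # average over anchors with positives

# Stage (ii): a classification head on top of the fine-tuned encoder scores every journal.
num_journals = 300                                            # hypothetical number of candidate journals
ranking_head = torch.nn.Linear(encoder.config.hidden_size, num_journals)

def recommend_topk(texts, k=5):
    logits = ranking_head(embed(texts))
    return logits.topk(k, dim=-1).indices                     # Top-K journal indices per paper

# Usage: concatenate title, abstract, and keywords (optionally the journal's aims and scope)
# into one input string, mirroring the feature combinations described in the abstract.
papers = ["Title A. Abstract ... Keywords: contrastive learning, NLP",
          "Title B. Abstract ... Keywords: recommendation systems",
          "Title C. Abstract ... Keywords: transformers, transfer learning",
          "Title D. Abstract ... Keywords: document representation"]
journal_ids = torch.tensor([0, 0, 1, 1])                      # same journal => positive pair
stage1_loss = supervised_contrastive_loss(embed(papers), journal_ids)
top5 = recommend_topk(papers, k=5)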
