Paper Title

What does BERT know about books, movies and music? Probing BERT for Conversational Recommendation

Paper Authors

Gustavo Penha, Claudia Hauff

Paper Abstract

Heavily pre-trained transformer models such as BERT have recently been shown to be remarkably powerful at language modelling, achieving impressive results on numerous downstream tasks. It has also been shown that they are able to implicitly store factual knowledge in their parameters after pre-training. Understanding what the pre-training procedure of LMs actually learns is a crucial step for using and improving them for Conversational Recommender Systems (CRS). We first study how much off-the-shelf pre-trained BERT "knows" about recommendation items such as books, movies and music. In order to analyze the knowledge stored in BERT's parameters, we use different probes that require different types of knowledge to solve, namely content-based and collaborative-based. Content-based knowledge requires the model to match the titles of items with their content information, such as textual descriptions and genres. In contrast, collaborative-based knowledge requires the model to match items with similar ones, according to community interactions such as ratings. We resort to BERT's Masked Language Modelling head to probe its knowledge about the genre of items, using cloze-style prompts. In addition, we employ BERT's Next Sentence Prediction head and the similarity of its representations to compare relevant and non-relevant search and recommendation query-document inputs, exploring whether BERT can, without any fine-tuning, rank relevant items first. Finally, we study how BERT performs in a conversational recommendation downstream task. Overall, our analyses and experiments show that: (i) BERT stores knowledge in its parameters about the content of books, movies and music; (ii) it has more content-based knowledge than collaborative-based knowledge; and (iii) it fails on conversational recommendation when faced with adversarial data.
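
The abstract describes two zero-shot probes: a cloze-style prompt fed to BERT's Masked Language Modelling head to check genre knowledge, and BERT's Next Sentence Prediction head used as a relevance score to rank items without fine-tuning. Below is a minimal sketch of how such probes can be run, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint; the prompt template, item titles and query are illustrative stand-ins, not the templates or data used in the paper.

```python
# Minimal sketch of the two probes outlined in the abstract (illustrative only).
import torch
from transformers import (
    BertTokenizer,
    BertForMaskedLM,
    BertForNextSentencePrediction,
)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# --- Content-based probe: cloze-style prompt against the Masked LM head ---
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
prompt = "Pulp Fiction is a movie of the [MASK] genre."  # hypothetical template
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits
# Locate the masked position and read off the top-5 vocabulary predictions.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print("top genre predictions:", tokenizer.convert_ids_to_tokens(top_ids))

# --- Zero-shot ranking probe: Next Sentence Prediction head as a relevance score ---
nsp = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
query = "Can you recommend a dystopian science fiction novel?"
candidates = [
    "Nineteen Eighty-Four is a dystopian novel by George Orwell.",  # relevant
    "Pride and Prejudice is a romance novel by Jane Austen.",       # non-relevant
]
for doc in candidates:
    enc = tokenizer(query, doc, return_tensors="pt")
    with torch.no_grad():
        # Class 0 of the NSP logits corresponds to "sentence B follows sentence A".
        score = nsp(**enc).logits.softmax(dim=-1)[0, 0].item()
    print(f"NSP score {score:.3f}  {doc}")
```

If the NSP score for the relevant document exceeds the score for the non-relevant one, the frozen model ranks the relevant item first, which is the kind of zero-shot ranking signal the abstract refers to.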
