Paper Title

Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking

Authors

Broscheit, Samuel

Abstract

A typical architecture for end-to-end entity linking systems consists of three steps: mention detection, candidate generation and entity disambiguation. In this study we investigate the following questions: (a) Can all those steps be learned jointly with a model for contextualized text-representations, i.e. BERT (Devlin et al., 2019)? (b) How much entity knowledge is already contained in pretrained BERT? (c) Does additional entity knowledge improve BERT's performance in downstream tasks? To this end, we propose an extreme simplification of the entity linking setup that works surprisingly well: simply cast it as a per token classification over the entire entity vocabulary (over 700K classes in our case). We show on an entity linking benchmark that (i) this model improves the entity representations over plain BERT, (ii) that it outperforms entity linking architectures that optimize the tasks separately and (iii) that it only comes second to the current state-of-the-art that does mention detection and entity disambiguation jointly. Additionally, we investigate the usefulness of entity-aware token-representations in the text-understanding benchmark GLUE, as well as the question answering benchmarks SQUAD V2 and SWAG and also the EN-DE WMT14 machine translation benchmark. To our surprise, we find that most of those benchmarks do not benefit from additional entity knowledge, except for a task with very small training data, the RTE task in GLUE, which improves by 2%.
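As a minimal sketch of the setup described in the abstract (entity linking cast as a per-token classification over the entire entity vocabulary on top of BERT), assuming PyTorch and the HuggingFace Transformers library. The class name, the extra "no entity" class, and the choice of bert-base-uncased are illustrative assumptions, not the authors' actual implementation:

```python
# Sketch: per-token entity classification over a large entity vocabulary with BERT.
# Illustrative only; hyperparameters and names are assumptions, not the paper's code.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

NUM_ENTITIES = 700_000  # the paper reports an entity vocabulary of over 700K classes


class BertEntityLinker(nn.Module):
    def __init__(self, num_entities: int = NUM_ENTITIES):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # A single linear layer maps each contextualized token representation to a
        # score for every entity, plus one extra class for "no entity" (assumption).
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_entities + 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, hidden_size)
        return self.classifier(hidden)  # (batch, seq_len, num_entities + 1)


tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertEntityLinker()
batch = tokenizer(["Samuel studied entity linking with BERT."], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
predicted_entities = logits.argmax(dim=-1)  # one entity id (or "no entity") per token
```

Training such a model amounts to standard per-token cross-entropy against gold entity labels; mention detection and disambiguation are not separate stages but fall out of the same classification decision.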
