Paper Title

A Unified Encoder-Decoder Framework with Entity Memory

Authors

Zhihan Zhang, Wenhao Yu, Chenguang Zhu, Meng Jiang

Abstract

Entities, as important carriers of real-world knowledge, play a key role in many NLP tasks. We focus on incorporating entity knowledge into an encoder-decoder framework for informative text generation. Existing approaches tried to index, retrieve, and read external documents as evidence, but they suffered from a large computational overhead. In this work, we propose an encoder-decoder framework with an entity memory, namely EDMem. The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with encoder-decoder parameters. To precisely generate entity names, we design three decoding methods to constrain entity generation by linking entities in the memory. EDMem is a unified framework that can be used on various entity-intensive question answering and generation tasks. Extensive experimental results show that EDMem outperforms both memory-based auto-encoder models and non-memory encoder-decoder models.
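The abstract's core mechanism — entity knowledge stored as latent representations in a memory that the encoder-decoder accesses, with decoding constrained to entities linked in that memory — can be illustrated with a minimal sketch. This is not the authors' implementation; the memory size, dimensions, and function names below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, not EDMem's actual code: model the entity memory as a
# table of latent entity embeddings that the model queries via attention.
rng = np.random.default_rng(0)
num_entities, d_model = 1000, 64  # assumed sizes for illustration

# Each row is the latent representation of one entity (pre-trained in the paper).
entity_memory = rng.standard_normal((num_entities, d_model))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def access_entity_memory(hidden_state):
    """Attend over the entity memory using a hidden state as the query.

    Returns a knowledge-enriched vector (weighted sum of entity embeddings)
    and the attention distribution over entities.
    """
    scores = entity_memory @ hidden_state      # (num_entities,) similarity scores
    probs = softmax(scores)                    # distribution over entities
    knowledge = probs @ entity_memory          # (d_model,) aggregated knowledge
    return knowledge, probs

def constrained_entity_decode(hidden_state):
    """Sketch of constrained decoding: instead of generating an entity name
    token-by-token freely, link to the highest-scoring entity in memory."""
    _, probs = access_entity_memory(hidden_state)
    return int(probs.argmax())                 # index of the linked entity

# Usage: enrich a decoder hidden state and pick a linked entity.
h = rng.standard_normal(d_model)
knowledge, probs = access_entity_memory(h)
entity_id = constrained_entity_decode(h)
```

In the paper this attention readout is trained jointly with the encoder-decoder on Wikipedia, and the linked-entity constraint is what lets the model emit entity names precisely rather than hallucinating surface forms.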
