Paper Title

Learning Cluster Patterns for Abstractive Summarization

Paper Authors

Sung-Guk Jo, Jeong-Jae Kim, Byung-Won On

Paper Abstract

Nowadays, pre-trained sequence-to-sequence models such as BERTSUM and BART have shown state-of-the-art results in abstractive summarization. In these models, during fine-tuning, the encoder transforms sentences into context vectors in the latent space, and the decoder learns the summary generation task based on those context vectors. In our approach, we consider two clusters of salient and non-salient context vectors, so that the decoder can attend more to the salient context vectors when generating the summary. To this end, we propose a novel clustering transformer layer between the encoder and the decoder, which first generates the two clusters of salient and non-salient vectors, and then normalizes and shrinks the clusters to push them apart in the latent space. Our experimental results show that the proposed model outperforms the existing BART model by learning these distinct cluster patterns, improving ROUGE by up to 4% and BERTScore by 0.3% on average on the CNN/DailyMail and XSum datasets.
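The abstract only describes the clustering transformer layer at a high level. Below is a minimal PyTorch-style sketch of the idea as stated: split the encoder's context vectors into salient and non-salient clusters, normalize them, and shrink each cluster so the two become more clearly separated before the decoder attends to them. The class name, the learned saliency head, the 0.5 threshold, and the shrink coefficient are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a clustering layer placed between a seq2seq encoder
# and decoder; names and the shrink rule are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusteringTransformerLayer(nn.Module):
    def __init__(self, d_model: int, shrink: float = 0.5):
        super().__init__()
        self.saliency = nn.Linear(d_model, 1)  # scores each context vector
        self.shrink = shrink                   # how far each vector moves toward its cluster centroid

    def forward(self, enc_states: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, seq_len, d_model) context vectors from the encoder
        h = F.normalize(enc_states, dim=-1)                      # normalize the vectors
        mask = (torch.sigmoid(self.saliency(h)) > 0.5).float()   # (batch, seq_len, 1): 1 = salient cluster
        # NOTE: hard thresholding is not differentiable; actually training the
        # saliency head would need a soft assignment or explicit saliency supervision.

        def centroid(m: torch.Tensor) -> torch.Tensor:
            return (h * m).sum(dim=1, keepdim=True) / m.sum(dim=1, keepdim=True).clamp(min=1.0)

        # Each vector's target is the centroid of its own cluster.
        target = mask * centroid(mask) + (1.0 - mask) * centroid(1.0 - mask)
        # Shrinking every vector toward its cluster centroid tightens both clusters,
        # making the salient and non-salient groups more separated in the latent space.
        return h + self.shrink * (target - h)
```

In use, the returned tensor would stand in for the raw encoder hidden states fed to the decoder's cross-attention (e.g., passed as the encoder output to a BART-style decoder), so the decoder attends to the cluster-adjusted context vectors rather than the originals.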
