Paper Title

KESA: A Knowledge Enhanced Approach For Sentiment Analysis

Paper Authors

Qinghua Zhao, Shuai Ma, Shuo Ren

Paper Abstract

Though some recent works focus on injecting sentiment knowledge into pre-trained language models, they usually design mask-and-reconstruction tasks in the post-training phase. In this paper, we aim to benefit from sentiment knowledge in a lighter way. To achieve this goal, we study sentence-level sentiment analysis and, correspondingly, propose two sentiment-aware auxiliary tasks named sentiment word cloze and conditional sentiment prediction. The first task learns to select the correct sentiment words within the input, given the overall sentiment polarity as prior knowledge. Conversely, the second task predicts the overall sentiment polarity given the sentiment polarity of a word as prior knowledge. In addition, two kinds of label combination methods are investigated to unify multiple types of labels in each task. We argue that more information can encourage models to learn deeper semantic representations, and we implement this idea in a straightforward way to verify the hypothesis. The experimental results demonstrate that our approach consistently outperforms pre-trained models and complements existing knowledge-enhanced post-trained models. The code and data are released at https://github.com/lshowway/KESA.
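The abstract describes the two auxiliary tasks only at a high level. The following PyTorch sketch shows one plausible way to set them up as extra heads trained alongside the main sentence-level classifier; the class and method names, the one-hot encoding of the polarity prior, and the concatenation scheme are illustrative assumptions, not the implementation released at https://github.com/lshowway/KESA.

```python
# Minimal sketch of the two auxiliary objectives described in the abstract.
# Everything here (class names, one-hot polarity priors, concatenation) is an
# illustrative assumption, not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxiliaryHeads(nn.Module):
    """Two auxiliary heads trained jointly with the main sentence classifier."""

    def __init__(self, hidden_size: int, num_polarities: int = 2):
        super().__init__()
        self.num_polarities = num_polarities
        # Task 1 (sentiment word cloze): score every token as the sentiment word,
        # conditioned on the sentence-level polarity given as prior knowledge.
        self.cloze_scorer = nn.Linear(hidden_size + num_polarities, 1)
        # Task 2 (conditional sentiment prediction): predict the sentence polarity,
        # conditioned on the polarity of a sentiment word given as prior knowledge.
        self.cond_classifier = nn.Linear(hidden_size + num_polarities, num_polarities)

    def sentiment_word_cloze(self, token_states, sentence_polarity, target_position):
        # token_states: (batch, seq_len, hidden) encoder outputs
        # sentence_polarity: (batch,) gold sentence labels used as the prior
        # target_position: (batch,) index of the correct sentiment word in the input
        prior = F.one_hot(sentence_polarity, self.num_polarities).float()
        prior = prior.unsqueeze(1).expand(-1, token_states.size(1), -1)
        logits = self.cloze_scorer(torch.cat([token_states, prior], dim=-1)).squeeze(-1)
        return F.cross_entropy(logits, target_position)

    def conditional_sentiment_prediction(self, sentence_state, word_polarity, sentence_polarity):
        # sentence_state: (batch, hidden) pooled sentence representation
        # word_polarity: (batch,) polarity of a sentiment word used as the prior
        prior = F.one_hot(word_polarity, self.num_polarities).float()
        logits = self.cond_classifier(torch.cat([sentence_state, prior], dim=-1))
        return F.cross_entropy(logits, sentence_polarity)
```

During training these two losses would typically be added to the main cross-entropy loss with tunable weights, e.g. loss = main + λ1·cloze + λ2·conditional; the exact combination and the label-unification schemes mentioned in the abstract are detailed in the paper itself.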
