Paper Title
Neural Subgraph Explorer: Reducing Noisy Information via Target-Oriented Syntax Graph Pruning
Paper Authors
Paper Abstract
Recent years have witnessed the emerging success of leveraging syntax graphs for the target sentiment classification task. However, we discover that existing syntax-based models suffer from two issues: noisy information aggregation and loss of distant correlations. In this paper, we propose a novel model termed Neural Subgraph Explorer, which (1) reduces noisy information by pruning target-irrelevant nodes on the syntax graph; and (2) introduces beneficial first-order connections between the target and its related words into the obtained graph. Specifically, we design a multi-hop action score estimator to evaluate the value of each word with respect to the specific target. The discrete action sequence is sampled via Gumbel-Softmax and then applied to both the syntax graph and the self-attention graph. To introduce first-order connections between the target and its relevant words, the two pruned graphs are merged. Finally, graph convolution is conducted on the resulting unified graph to update the hidden states, and this process is stacked over multiple layers. To our knowledge, this is the first attempt at target-oriented syntax graph pruning for this task. Experimental results demonstrate the superiority of our model, which achieves new state-of-the-art performance.
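
Below is a minimal, hedged sketch (not the authors' released code) of how one layer of the described procedure could look in PyTorch: target-conditioned keep/drop logits, discrete actions sampled with the straight-through Gumbel-Softmax, pruning applied to both the syntax and self-attention adjacency matrices, a union merge that gives the target first-order connections to its surviving related words, and a single graph-convolution update. All class and variable names (ScoreEstimator, ExplorerLayer, syntax_adj, attn_adj, target_mask) are illustrative assumptions, and the score estimator is simplified to a single projection rather than the paper's multi-hop design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class ScoreEstimator(nn.Module):
        """Scores each word's value w.r.t. the target and emits keep/drop logits.

        Simplified single-hop stand-in for the paper's multi-hop estimator.
        """

        def __init__(self, dim: int):
            super().__init__()
            self.proj = nn.Linear(2 * dim, 2)  # logits for the {drop, keep} actions

        def forward(self, h: torch.Tensor, target_vec: torch.Tensor) -> torch.Tensor:
            # h: (n, d) word states; target_vec: (d,) pooled target representation
            paired = torch.cat([h, target_vec.expand_as(h)], dim=-1)
            return self.proj(paired)  # (n, 2)


    class ExplorerLayer(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.estimator = ScoreEstimator(dim)
            self.gcn_weight = nn.Linear(dim, dim)

        def forward(self, h, syntax_adj, attn_adj, target_mask, tau: float = 1.0):
            # h: (n, d); syntax_adj, attn_adj: (n, n) in {0, 1}; target_mask: (n,) bool
            target_vec = h[target_mask].mean(dim=0)

            # Discrete keep/drop actions via straight-through Gumbel-Softmax.
            logits = self.estimator(h, target_vec)                  # (n, 2)
            actions = F.gumbel_softmax(logits, tau=tau, hard=True)  # one-hot rows
            keep = actions[:, 1]                                    # (n,) in {0, 1}
            keep = torch.maximum(keep, target_mask.float())         # never drop target words

            # Prune target-irrelevant nodes from both graphs.
            node_mask = keep.unsqueeze(0) * keep.unsqueeze(1)       # (n, n)
            pruned_syntax = syntax_adj * node_mask
            pruned_attn = attn_adj * node_mask

            # Merge (union) the two pruned graphs; surviving attention edges give the
            # target direct first-order connections to distant but relevant words.
            merged = torch.maximum(pruned_syntax, pruned_attn)
            merged = torch.maximum(merged, torch.eye(h.size(0), device=h.device))

            # One graph-convolution step with mean aggregation over the merged graph.
            degree = merged.sum(dim=-1, keepdim=True).clamp(min=1.0)
            return torch.relu(self.gcn_weight(merged @ h / degree))

Stacking several such layers and classifying from the final target-word states would mirror the multi-layer pipeline summarized in the abstract.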