Paper Title

Graph-to-Text Generation with Dynamic Structure Pruning

Authors

Liang Li, Ruiying Geng, Bowen Li, Can Ma, Yinliang Yue, Binhua Li, Yongbin Li

Abstract

Most graph-to-text works are built on the encoder-decoder framework with a cross-attention mechanism. Recent studies have shown that explicitly modeling the input graph structure can significantly improve performance. However, the vanilla structural encoder cannot capture all specialized information in a single forward pass for all decoding steps, resulting in inaccurate semantic representations. Meanwhile, the input graph is flattened as an unordered sequence in the cross-attention, ignoring the original graph structure. As a result, the input graph context vector obtained in the decoder may be flawed. To address these issues, we propose a Structure-Aware Cross-Attention (SACA) mechanism to re-encode the input graph representation conditioned on the newly generated context at each decoding step in a structure-aware manner. We further adapt SACA and introduce its variant, the Dynamic Graph Pruning (DGP) mechanism, to dynamically drop irrelevant nodes during decoding. We achieve new state-of-the-art results on two graph-to-text datasets, LDC2020T02 and ENT-DESC, with only a minor increase in computational cost.
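
To make the SACA idea concrete, below is a minimal PyTorch sketch of one way such a mechanism could look. It is our illustration, not the authors' released code: the class name, the virtual-node joining scheme, and the masking details are all assumptions. The core idea it shows is re-encoding the graph nodes at each decoding step, conditioned on the current decoder state, with attention restricted by the graph's adjacency so structure is respected.

```python
import torch
import torch.nn as nn

class StructureAwareCrossAttention(nn.Module):
    # Illustrative sketch of a SACA-style layer; not the paper's implementation.
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, dec_state, node_states, adj):
        # dec_state:   (batch, d_model)           decoder state at the current step
        # node_states: (batch, n_nodes, d_model)  encoder node representations
        # adj:         (batch, n_nodes, n_nodes)  boolean adjacency (self-loops included)
        virtual = dec_state.unsqueeze(1)                  # decoder state as a virtual node
        joint = torch.cat([virtual, node_states], dim=1)  # (batch, 1 + n_nodes, d_model)
        b, n, d = joint.shape
        # Extend adjacency so the virtual node connects to every real node,
        # while real nodes keep only their original graph neighbours.
        mask = torch.ones(b, n, n, dtype=torch.bool, device=joint.device)
        mask[:, 1:, 1:] = adj
        scores = self.q(joint) @ self.k(joint).transpose(1, 2) / d ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))
        out = torch.softmax(scores, dim=-1) @ self.v(joint)
        # Slot 0 is the structure-aware context vector for this decoding step;
        # the remaining slots are the re-encoded node representations.
        return out[:, 0], out[:, 1:]
```

A DGP-style variant could be sketched on top of this by scoring each node against dec_state and zeroing low-relevance rows and columns of mask before the softmax; the scoring function and pruning threshold would be further assumptions beyond what the abstract specifies.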
