Paper Title
Relphormer: Relational Graph Transformer for Knowledge Graph Representations
Paper Authors
Paper Abstract
Transformers have achieved remarkable performance in a wide range of fields, including natural language processing, computer vision, and graph mining. However, vanilla Transformer architectures have not yielded promising improvements for knowledge graph (KG) representations, an area still dominated by the translational distance paradigm. Note that vanilla Transformer architectures struggle to capture the intrinsically heterogeneous structural and semantic information of knowledge graphs. To this end, we propose a new Transformer variant for knowledge graph representations, dubbed Relphormer. Specifically, we introduce Triple2Seq, which dynamically samples contextualized sub-graph sequences as input to alleviate the heterogeneity issue. We propose a novel structure-enhanced self-attention mechanism to encode relational information while preserving the semantic information of entities and relations. Moreover, we utilize masked knowledge modeling for general knowledge graph representation learning, which can be applied to various KG-based tasks, including knowledge graph completion, question answering, and recommendation. Experimental results on six datasets show that Relphormer obtains better performance than baselines. Code is available at https://github.com/zjunlp/Relphormer.
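To make the abstract's central mechanism concrete, the following is a minimal sketch, not the authors' implementation: one plausible form of structure-enhanced self-attention, in which an (n x n) structural bias term is added to the attention scores before the softmax. Here the bias is derived from a toy adjacency matrix; the actual model encodes relational structure its own way, and all identifiers (structure_enhanced_attention, bias, etc.) are illustrative.

```python
# Hedged sketch of structure-enhanced self-attention over a sampled
# sub-graph sequence (as Triple2Seq would produce). Not the official code.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structure_enhanced_attention(H, Wq, Wk, Wv, bias):
    """H: (n, d) features of the n nodes in a contextualized sub-graph;
    bias: (n, n) structural scores, added to attention logits pre-softmax."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + bias  # structural bias modulates attention
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 5, 16                               # 5 nodes in a toy sub-graph
H = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
adj = rng.integers(0, 2, size=(n, n))      # toy adjacency between nodes
bias = np.where(adj > 0, 0.0, -1e9)        # hard mask here; a learned/soft
                                           # bias is equally compatible
out = structure_enhanced_attention(H, Wq, Wk, Wv, bias)
print(out.shape)  # (5, 16)
```

The masked knowledge modeling objective mentioned in the abstract is analogous to masked language modeling: one element of a (head, relation, tail) triple is replaced by a mask token and the model is trained to recover it, which is how the same pre-trained representation serves completion, question answering, and recommendation.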