Paper Title


RAILD: Towards Leveraging Relation Features for Inductive Link Prediction In Knowledge Graphs

Paper Authors

Genet Asefa Gesese, Harald Sack, Mehwish Alam

Paper Abstract


Due to the open world assumption, Knowledge Graphs (KGs) are never complete. To address this issue, various Link Prediction (LP) methods have been proposed. Some of these methods are inductive LP models, which are capable of learning representations for entities not seen during training. However, to the best of our knowledge, none of the existing inductive LP models focus on learning representations for unseen relations. In this work, a novel Relation Aware Inductive Link preDiction (RAILD) model is proposed for KG completion, which learns representations for both unseen entities and unseen relations. In addition to leveraging textual literals associated with both entities and relations by employing language models, RAILD also introduces a novel graph-based approach to generate features for relations. Experiments are conducted on different existing and newly created challenging benchmark datasets, and the results indicate that RAILD leads to performance improvements over state-of-the-art models. Moreover, since there are no existing inductive LP models that learn representations for unseen relations, we have created our own baselines, and the results obtained with RAILD outperform these baselines as well.
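The core idea in the abstract is that triples can be scored from textual features of entities and relations, so that entities and relations never seen during training can still be embedded at inference time. The following is a minimal illustrative sketch of that idea only, not RAILD's actual model: the hash-based `text_features` encoder is a hypothetical stand-in for a real language-model encoder, and the trilinear (DistMult-style) scorer is one common choice, not necessarily the one used in the paper.

```python
import hashlib
import math

def text_features(text, dim=16):
    # Hypothetical stand-in for a language-model encoder: derive a
    # deterministic pseudo-embedding from the text's SHA-256 digest.
    h = hashlib.sha256(text.encode("utf-8")).digest()
    vec = [(b - 128) / 128 for b in h[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def score_triple(head_desc, rel_desc, tail_desc, dim=16):
    # Score a (head, relation, tail) triple using only textual
    # descriptions, so unseen entities AND unseen relations can be
    # handled without retraining (the inductive setting).
    h = text_features(head_desc, dim)
    r = text_features(rel_desc, dim)
    t = text_features(tail_desc, dim)
    # DistMult-style trilinear score over the text-derived vectors.
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))

s = score_triple("Berlin, capital of Germany",
                 "is located in",
                 "Germany, country in Europe")
print(round(s, 4))
```

Because every embedding is computed from text on the fly, a relation that never appeared in the training graph still gets a feature vector; in RAILD itself this role is played by language-model encodings plus the graph-based relation features described above.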
