Paper Title

Unifying Graph Contrastive Learning with Flexible Contextual Scopes

Paper Authors

Zheng, Yizhen, Zheng, Yu, Zhou, Xiaofei, Gong, Chen, Lee, Vincent CS, Pan, Shirui

Paper Abstract

Graph contrastive learning (GCL) has recently emerged as an effective learning paradigm to alleviate the reliance on labelling information for graph representation learning. The core of GCL is to maximise the mutual information between the representation of a node and its contextual representation (i.e., the corresponding instance with similar semantic information) summarised from the contextual scope (e.g., the whole graph or 1-hop neighbourhood). This scheme distils valuable self-supervision signals for GCL training. However, existing GCL methods still suffer from limitations, such as the incapacity or inconvenience of choosing a suitable contextual scope for different datasets, and building biased contrastiveness. To address the aforementioned problems, we present a simple self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short). Our algorithm builds flexible contextual representations with tunable contextual scopes by controlling the power of an adjacency matrix. Additionally, our method ensures contrastiveness is built within connected components to reduce the bias of contextual representations. Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning. Essentially, the architecture of UGCL can be considered a general framework that unifies existing GCL methods. We have conducted intensive experiments and achieved new state-of-the-art performance on six out of eight benchmark datasets compared with self-supervised graph representation learning baselines. Our code has been open-sourced.
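The abstract's key mechanism is tuning the contextual scope via the power of a (normalized) adjacency matrix: propagating node features k times widens each node's context to its k-hop neighbourhood. Below is a minimal NumPy sketch of that idea only; the function name and the row-normalization choice are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def contextual_representation(adj, feats, k):
    """Aggregate features over a k-hop contextual scope by propagating
    them k times with a row-normalized adjacency matrix (self-loops
    added), i.e. computing A_norm^k @ feats.

    Note: this only sketches the tunable-scope idea from the abstract;
    it is not the paper's actual UGCL implementation.
    """
    a = adj + np.eye(adj.shape[0])   # add self-loops so a node keeps its own signal
    a_norm = a / a.sum(axis=1, keepdims=True)  # row-normalize: each row sums to 1
    ctx = feats
    for _ in range(k):               # k propagation steps = k-hop contextual scope
        ctx = a_norm @ ctx
    return ctx

# Toy 4-node path graph; increasing k widens each node's contextual scope.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
feats = np.eye(4)                    # one-hot features for illustration
ctx = contextual_representation(adj, feats, k=2)
print(ctx.shape)
```

Because propagation stays on the graph's edges, contextual representations are mixed only within a connected component, matching the abstract's point about reducing bias from unrelated nodes.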
