Paper Title
On Understanding and Mitigating the Dimensional Collapse of Graph Contrastive Learning: a Non-Maximum Removal Approach
Paper Authors
Paper Abstract
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations. GCL can generate graph-level embeddings by maximizing the Mutual Information (MI) between different augmented views of the same graph (positive pairs). However, GCL is limited by dimensional collapse, i.e., embedding vectors occupy only a low-dimensional subspace. In this paper, we show that the smoothing effect of graph pooling and the implicit regularization of graph convolution are two causes of dimensional collapse in GCL. To mitigate this issue, we propose a non-maximum removal graph contrastive learning approach (nmrGCL), which removes the "prominent" dimensions (i.e., those that contribute most to the similarity measure) of positive pairs in the pretext task. Comprehensive experiments on various benchmark datasets demonstrate the effectiveness of nmrGCL and show that it outperforms state-of-the-art methods. The source code will be made publicly available.
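To make the core idea concrete, below is a minimal sketch of how non-maximum removal could be wired into a contrastive objective. It is an illustration under assumptions, not the authors' released implementation: the function name nmr_contrastive_loss and the hyperparameters k and temperature are hypothetical, and the exact masking rule in the paper may differ.

```python
import torch
import torch.nn.functional as F

def nmr_contrastive_loss(z1, z2, k=8, temperature=0.5):
    """Hypothetical sketch of non-maximum removal: for each positive
    pair, zero out the k dimensions that contribute most to the pair's
    similarity, then compute a standard NT-Xent contrastive loss."""
    z1 = F.normalize(z1, dim=1)  # (N, D) embeddings of augmented view 1
    z2 = F.normalize(z2, dim=1)  # (N, D) embeddings of augmented view 2

    # Per-dimension contribution of each positive pair to its cosine
    # similarity: sim(z1, z2) = sum_d z1[d] * z2[d].
    contrib = z1 * z2  # (N, D)

    # Mask the k most "prominent" dimensions of each pair.
    topk_idx = contrib.topk(k, dim=1).indices
    mask = torch.ones_like(z1)
    mask.scatter_(1, topk_idx, 0.0)

    # Re-normalize so the loss still acts on unit-length vectors.
    z1m = F.normalize(z1 * mask, dim=1)
    z2m = F.normalize(z2 * mask, dim=1)

    # Standard NT-Xent on the masked embeddings: diagonal entries are
    # the positive pairs, off-diagonal entries serve as negatives.
    logits = z1m @ z2m.t() / temperature  # (N, N)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

Under this reading, removing the dominant dimensions forces the objective to extract signal from the remaining ones, spreading information across the embedding space rather than letting a few dimensions carry all of the similarity; the number of removed dimensions k would be a tuning parameter.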