Paper Title
Graph Contrastive Learning with Cross-view Reconstruction
Paper Authors
Paper Abstract
Among existing graph self-supervised learning strategies, graph contrastive learning (GCL) has been one of the most prevalent approaches. Despite the remarkable performance these GCL methods have achieved, existing methods that depend heavily on various manually designed augmentation techniques still struggle to alleviate the feature suppression issue without risking the loss of task-relevant information. Consequently, the learned representation is either brittle or unilluminating. In light of this, we introduce Graph Contrastive Learning with Cross-View Reconstruction (GraphCV), which follows the information bottleneck principle to learn a minimal yet sufficient representation from graph data. Specifically, GraphCV aims to elicit the predictive features (useful for downstream instance discrimination) and the non-predictive features separately. In addition to the conventional contrastive loss, which guarantees the consistency and sufficiency of the representation across different augmentation views, we introduce a cross-view reconstruction mechanism to pursue the disentanglement of the two learned representations. Furthermore, an adversarial view perturbed from the original view is added as a third view for the contrastive loss, preserving the intactness of the global semantics and improving representation robustness. We empirically demonstrate that our proposed model outperforms the state of the art on graph classification across multiple benchmark datasets.
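For illustration only, the two loss components the abstract describes can be sketched as follows. This is not the authors' implementation: the function names, embedding shapes, the simple linear decoder `W`, and the use of a standard InfoNCE-style contrastive loss are all assumptions made for the sketch.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE contrastive loss between two augmented-view
    embeddings. z1, z2: (n, d) arrays; matched rows are positive pairs."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives on the diagonal

def cross_view_reconstruction(p1, n1, p2, n2, x1, x2, W):
    """Hypothetical cross-view reconstruction term: the predictive part of
    one view combined with the non-predictive part of the *other* view
    should recover the original input, encouraging disentanglement."""
    r1 = np.concatenate([p1, n2], axis=1) @ W   # reconstruct view 1
    r2 = np.concatenate([p2, n1], axis=1) @ W   # reconstruct view 2
    return np.mean((r1 - x1) ** 2) + np.mean((r2 - x2) ** 2)

# Toy data standing in for graph-level embeddings from an encoder
rng = np.random.default_rng(0)
n, d, h = 8, 16, 4
z1, z2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))
p1, n1 = rng.normal(size=(n, h)), rng.normal(size=(n, h))  # predictive / non-predictive, view 1
p2, n2 = rng.normal(size=(n, h)), rng.normal(size=(n, h))  # predictive / non-predictive, view 2
x1, x2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))  # inputs to reconstruct
W = rng.normal(size=(2 * h, d))                            # assumed linear decoder

loss = info_nce(z1, z2) + cross_view_reconstruction(p1, n1, p2, n2, x1, x2, W)
```

In the full method the abstract describes, a third, adversarially perturbed view would enter the contrastive term alongside the two augmented views; it is omitted here to keep the sketch minimal.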