Paper Title
Analyzing Data-Centric Properties for Graph Contrastive Learning
Paper Authors
Paper Abstract
Recent analyses of self-supervised learning (SSL) find the following data-centric properties to be critical for learning good representations: invariance to task-irrelevant semantics, separability of classes in some latent space, and recoverability of labels from augmented samples. However, given their discrete, non-Euclidean nature, graph datasets and graph SSL methods are unlikely to satisfy these properties. This raises the question: how do graph SSL methods, such as contrastive learning (CL), work well? To systematically probe this question, we perform a generalization analysis for CL when using generic graph augmentations (GGAs), with a focus on data-centric properties. Our analysis yields formal insights into the limitations of GGAs and the necessity of task-relevant augmentations. As we empirically show, GGAs do not induce task-relevant invariances on common benchmark datasets, leading to only marginal gains over naive, untrained baselines. Our theory motivates a synthetic data generation process that enables control over task-relevant information and boasts pre-defined optimal augmentations. This flexible benchmark helps us identify yet unrecognized limitations in advanced augmentation techniques (e.g., automated methods). Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL.
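For readers unfamiliar with what "generic graph augmentations" look like in practice, below is a minimal NumPy sketch of two widely used GGAs, random edge dropping and node-feature masking, producing the two stochastic views consumed by a contrastive objective. The function names, drop ratios, and data layout are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def drop_edges(edge_index, p=0.2, rng=None):
    """Remove each edge independently with probability p.
    edge_index: (2, E) integer array of source/target node indices."""
    rng = rng or np.random.default_rng()
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

def mask_features(x, p=0.2, rng=None):
    """Zero out each feature column independently with probability p.
    x: (N, F) node-feature matrix."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape[1]) >= p
    return x * mask  # broadcasts the (F,) mask over all N nodes

# Two stochastically augmented "views" of the same graph, as would be fed
# to a contrastive loss such as InfoNCE (positive pair: view1 vs. view2).
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                      # 5 nodes, 8 features
edge_index = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])  # 4 directed edges
view1 = (mask_features(x, rng=rng), drop_edges(edge_index, rng=rng))
view2 = (mask_features(x, rng=rng), drop_edges(edge_index, rng=rng))
```

Note that, as the abstract argues, augmentations of this kind perturb graph structure and features generically, with no guarantee of preserving task-relevant semantics.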