Paper Title

Mitigating the Performance Sacrifice in DP-Satisfied Federated Settings through Graph Contrastive Learning

Paper Authors

Haoran Yang, Xiangyu Zhao, Muyang Li, Hongxu Chen, Guandong Xu

Paper Abstract

Currently, graph learning models are indispensable tools to help researchers explore graph-structured data. In academia, using sufficient training data to optimize a graph model on a single device is a typical approach for training a capable graph learning model. Due to privacy concerns, however, it is infeasible to do so in real-world scenarios. Federated learning provides a practical means of addressing this limitation by introducing various privacy-preserving mechanisms, such as differential privacy (DP) on the graph edges. However, although DP in federated graph learning can ensure the security of sensitive information represented in graphs, it usually causes the performance of graph learning models to degrade. In this paper, we investigate how DP can be implemented on graph edges and observe a performance decrease in our experiments. In addition, we note that DP on graph edges introduces noise that perturbs graph proximity, which is one of the graph augmentations in graph contrastive learning. Inspired by this, we propose leveraging graph contrastive learning to alleviate the performance drop resulting from DP. Extensive experiments conducted with four representative graph models on five widely used benchmark datasets show that contrastive learning indeed alleviates the models' DP-induced performance drops.
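The abstract does not specify the exact edge-level DP mechanism, so the following is a minimal sketch, assuming the common randomized-response mechanism for binary edges. It illustrates the observation above: DP noise on edges randomly flips graph connections, which is the same kind of proximity perturbation that edge-perturbation augmentations produce in graph contrastive learning. The function name `randomized_response_edges` and all parameter choices are illustrative, not from the paper.

```python
# A minimal sketch (assumption: randomized response over the adjacency matrix,
# a standard epsilon-edge-DP mechanism; not necessarily the paper's mechanism).
import numpy as np


def randomized_response_edges(adj: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Flip each potential edge independently with probability 1 / (1 + e^epsilon).

    Reporting each adjacency bit truthfully with probability
    e^epsilon / (1 + e^epsilon) satisfies epsilon-DP per edge for this
    simple binary randomized-response mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flips = rng.random(adj.shape) > keep_prob  # True where the bit gets flipped
    noisy = np.where(flips, 1 - adj, adj)
    # Re-symmetrize for an undirected graph and drop self-loops.
    noisy = np.triu(noisy, k=1)
    return noisy + noisy.T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy undirected 6-node cycle graph.
    adj = np.zeros((6, 6), dtype=int)
    for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:
        adj[u, v] = adj[v, u] = 1
    noisy = randomized_response_edges(adj, epsilon=1.0, rng=rng)
    changed = int(np.sum(np.triu(noisy != adj, k=1)))
    print(f"edges flipped by the DP mechanism: {changed}")
```

Note how the mechanism's output is structurally indistinguishable from a stochastically edge-perturbed "view" of the graph; this resemblance to contrastive-learning augmentations is the intuition the abstract states for why graph contrastive learning can absorb, rather than suffer from, the DP-induced noise.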
