Paper Title

A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability

Paper Authors

Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang

Paper Abstract

Graph Neural Networks (GNNs) have made rapid developments in recent years. Due to their great ability to model graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential in benefiting humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into giving the outcomes they desire with unnoticeable perturbations on the training graph. GNNs trained on social networks may embed discrimination in their decision process, reinforcing undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.
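To make the abstract's adversarial-attack example concrete, below is a minimal, self-contained sketch in plain PyTorch, not taken from the surveyed paper: it trains a tiny two-layer GCN on a toy 4-node graph, then uses the gradient of the training loss with respect to the adjacency matrix to select the single edge flip that most increases the loss. The toy graph, the `gcn_forward` helper, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy graph (illustrative): 4 nodes on a path, symmetric adjacency
# matrix, 2-dimensional random features, binary node labels.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
X = torch.randn(4, 2)
y = torch.tensor([0, 0, 1, 1])

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN with the standard renormalized propagation
    A_hat = D^{-1/2} (A + I) D^{-1/2}; returns class logits."""
    A_tilde = A + torch.eye(A.size(0))
    D_inv_sqrt = torch.diag(A_tilde.sum(1).pow(-0.5))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    H = torch.relu(A_hat @ X @ W1)
    return A_hat @ H @ W2

# Train the tiny GCN on the clean graph.
W1 = torch.randn(2, 8, requires_grad=True)
W2 = torch.randn(8, 2, requires_grad=True)
opt = torch.optim.Adam([W1, W2], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(gcn_forward(A, X, W1, W2), y).backward()
    opt.step()

# Attack: differentiate the loss w.r.t. the adjacency matrix and flip
# the single edge whose change most increases the loss, i.e. a one-edge,
# hard-to-notice structure perturbation.
A_adv = A.clone().requires_grad_(True)
F.cross_entropy(gcn_forward(A_adv, X, W1, W2), y).backward()
# Adding an absent edge (A=0) helps the attacker when the gradient is
# positive; removing a present edge (A=1) helps when it is negative.
score = A_adv.grad * (1 - 2 * A)
score.fill_diagonal_(-float('inf'))  # never add self-loops
i, j = divmod(int(score.argmax()), A.size(0))
A_pert = A.clone()
flipped = 1.0 - A_pert[i, j].item()
A_pert[i, j] = A_pert[j, i] = flipped  # keep the graph symmetric

with torch.no_grad():
    clean = (gcn_forward(A, X, W1, W2).argmax(1) == y).float().mean().item()
    pert = (gcn_forward(A_pert, X, W1, W2).argmax(1) == y).float().mean().item()
print(f"train accuracy: clean={clean:.2f}, after one edge flip={pert:.2f}")
```

On a graph this small, a single flip may or may not change the reported accuracy, but the scoring rule is the gradient-based heuristic that greedy structure-perturbation attacks of this kind build on.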
