Paper Title

Robust Graph Neural Networks using Weighted Graph Laplacian

Paper Authors

Bharat Runwal, Vivek, Sandeep Kumar

Paper Abstract

Graph neural networks (GNNs) achieve remarkable performance in a variety of application domains. However, GNNs are vulnerable to noise and adversarial attacks in the input data, so making them robust against such perturbations is an important problem. Existing defense methods for GNNs are computationally demanding and not scalable. In this paper, we propose a generic framework for robustifying GNNs, known as Weighted Laplacian GNN (RWL-GNN). The method combines weighted graph Laplacian learning with the GNN implementation. By formulating a unified optimization framework, the proposed method benefits from the positive semi-definiteness of the Laplacian matrix, feature smoothness, and latent features, ensuring that adversarial/noisy edges are discarded and the remaining connections in the graph are appropriately weighted. For demonstration, the experiments are conducted with the graph convolutional neural network (GCNN) architecture; however, the proposed framework is easily amenable to any existing GNN architecture. Simulation results on benchmark datasets establish the efficacy of the proposed method in terms of both accuracy and computational efficiency. Code can be accessed at https://github.com/Bharat-Runwal/RWL-GNN.
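To make the idea in the abstract concrete, below is a minimal PyTorch sketch of jointly learning nonnegative edge weights (which define a weighted graph Laplacian) together with a simple two-layer GCN, combining a feature-smoothness term, a fidelity term to the observed (possibly noisy) graph, and the downstream classification loss. This is an illustrative sketch only: the parameterization, loss terms, hyperparameters, and update scheme are assumptions and do not reproduce the paper's exact formulation (see the repository above for the authors' implementation).

```python
# Illustrative sketch in the spirit of RWL-GNN: learn edge weights alongside a GCN.
# All names and hyperparameters (alpha, beta, softplus parameterization) are assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: n nodes, d features, c classes, and a (possibly noisy) observed adjacency.
n, d, c = 20, 8, 3
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))
A_obs = (torch.rand(n, n) < 0.2).float()
A_obs = torch.triu(A_obs, 1); A_obs = A_obs + A_obs.T   # symmetric, no self-loops

# Learnable edge weights, kept nonnegative via softplus so that the
# resulting Laplacian L = D - W is positive semi-definite by construction.
w_raw = torch.nn.Parameter(torch.zeros(n, n))

# Parameters of a minimal two-layer GCN operating on the learned weighted graph.
W1 = torch.nn.Parameter(torch.randn(d, 16) * 0.1)
W2 = torch.nn.Parameter(torch.randn(16, c) * 0.1)

def learned_adjacency():
    W = F.softplus(w_raw)
    W = torch.triu(W, 1)
    return W + W.T                                       # symmetric, zero diagonal

def gcn_forward(W):
    A_hat = W + torch.eye(n)                             # add self-loops
    deg = A_hat.sum(1).clamp(min=1e-8)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalization
    H = torch.relu(A_norm @ X @ W1)
    return A_norm @ H @ W2                               # class logits

opt = torch.optim.Adam([w_raw, W1, W2], lr=0.05)
alpha, beta = 0.1, 0.1                                   # assumed trade-off weights
for step in range(200):
    W = learned_adjacency()
    L = torch.diag(W.sum(1)) - W                         # weighted graph Laplacian
    smooth = torch.trace(X.T @ L @ X) / n                # feature smoothness tr(X^T L X)
    fidelity = ((W - A_obs) ** 2).sum() / n              # stay close to the observed graph
    ce = F.cross_entropy(gcn_forward(W), y)              # downstream GNN loss
    loss = ce + alpha * smooth + beta * fidelity
    opt.zero_grad(); loss.backward(); opt.step()
```

The smoothness and fidelity terms pull the learned weights toward a clean, feature-consistent graph, so edges inconsistent with the node features (e.g., adversarially inserted ones) receive small weights, while the classification loss keeps the graph useful for the GNN task.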
