Paper Title
Adversarial Attack on Large Scale Graph
Paper Authors
Paper Abstract
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness and can therefore be easily fooled. Most current works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance. However, their high time and space complexity makes them unmanageable for large-scale graphs and becomes the major bottleneck preventing practical usage. We argue that the main reason is that they have to use the whole graph for the attack, so the time and space complexity grows with the data scale. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) method to bridge this gap. SGA can cause GNNs to misclassify specific target nodes through a multi-stage attack framework that needs only a much smaller subgraph. In addition, we present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data. We evaluate our attack method on four real-world graph networks by attacking several commonly used GNNs. The experimental results demonstrate that SGA achieves significant time and memory efficiency improvements while maintaining competitive attack performance compared to state-of-the-art attack techniques. Code is available at: https://github.com/EdisonLeeeee/SGAttack.
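The two ideas the abstract highlights can be sketched in a few lines: (1) restricting a targeted attack to the target node's k-hop subgraph, since a k-layer GNN's prediction for a node depends only on that neighborhood, and (2) measuring attack impact as the change in degree assortativity. This is a minimal illustration using networkx; the helper names, the 2-hop radius, and the absolute-difference form of DAC are assumptions for exposition, not the paper's actual implementation.

```python
import networkx as nx

def khop_subgraph(graph, target, k=2):
    """Extract the target's k-hop neighborhood. For a k-layer GNN, only this
    subgraph influences the target's prediction, so an attack can operate on
    it instead of the whole graph (illustrative simplification)."""
    return nx.ego_graph(graph, target, radius=k)

def degree_assortativity_change(g_clean, g_attacked):
    """DAC idea (hypothetical form): how much a perturbation shifts the
    graph's degree assortativity coefficient, a standard structural statistic."""
    return abs(nx.degree_assortativity_coefficient(g_attacked)
               - nx.degree_assortativity_coefficient(g_clean))

# Toy demonstration on a small built-in graph.
g = nx.karate_club_graph()
sub = khop_subgraph(g, target=0, k=2)   # far smaller than the full graph on large networks
perturbed = g.copy()
perturbed.remove_edge(0, 1)             # a single adversarial edge deletion
print(sub.number_of_nodes(), g.number_of_nodes(),
      degree_assortativity_change(g, perturbed))
```

On real large graphs the k-hop subgraph is orders of magnitude smaller than the full graph, which is the source of the time and memory savings the abstract claims.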