Paper Title

Single-Node Attacks for Fooling Graph Neural Networks

Authors

Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon

Abstract


Graph neural networks (GNNs) have shown broad applicability in a variety of domains. These domains, e.g., social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable in the extremely limited (and thus quite realistic) scenario of a single-node adversarial attack, where the perturbed node cannot be chosen by the attacker. That is, an attacker can force the GNN to classify any target node to a chosen label by only slightly perturbing the features or the neighbor list of another single, arbitrary node in the graph, even when unable to select that specific attacker node. When the adversary is allowed to select the attacker node, these attacks are even more effective. We demonstrate empirically that our attack is effective across various common GNN types (e.g., GCN, GraphSAGE, GAT, GIN) and robustly optimized GNNs (e.g., Robust GCN, SM GCN, GAL, LAT-GCN), outperforming previous attacks across different real-world datasets in both targeted and non-targeted settings. Our code is available at https://github.com/benfinkelshtein/SINGLE.
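To give a rough feel for the kind of attack the abstract describes, here is a minimal sketch (not the paper's actual method, and the graph, features, and weights are invented for illustration) of a gradient-based single-node feature perturbation against a linear one-layer GCN: the attacker perturbs only its own feature vector, yet flips the predicted label of a neighboring target node.

```python
import numpy as np

# Toy graph: node 0 (attacker) -- node 1 (target); node 2 is isolated.
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)
A_hat = A + np.eye(3)                        # add self-loops
A_hat = A_hat / A_hat.sum(1, keepdims=True)  # row-normalize the adjacency

X = np.array([[0.2, 0.0],    # attacker features
              [0.0, 0.6],    # target features
              [1.0, 0.0]])   # unrelated node
W = np.eye(2)                # toy weights of a linear one-layer GCN

def predict(features):
    """Node-wise class predictions of the linear GCN."""
    return (A_hat @ features @ W).argmax(1)

target, attacker, wanted = 1, 0, 0
orig = predict(X)[target]    # originally class 1

# Gradient of (logit_wanted - logit_orig) at the target node with respect
# to the attacker's features; for this linear model it is simply
# A_hat[target, attacker] * (W[:, wanted] - W[:, orig]).
grad = A_hat[target, attacker] * (W[:, wanted] - W[:, orig])

eps = 0.5                    # small perturbation budget
X_adv = X.copy()
X_adv[attacker] += eps * np.sign(grad)   # FGSM-style signed-gradient step

print(predict(X)[target], predict(X_adv)[target])  # 1 -> 0: label flipped
```

The target node's own features are never touched; only the single attacker node moves within a small budget, which is the constraint the single-node setting imposes.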
