Paper Title
Global Counterfactual Explainer for Graph Neural Networks
Paper Authors
Paper Abstract
Graph neural networks (GNNs) find applications in various domains such as computational biology, natural language processing, and computer security. Owing to their popularity, there is an increasing need to explain GNN predictions, since GNNs are black-box machine learning models. One way to address this is counterfactual reasoning, where the objective is to change the GNN prediction through minimal changes to the input graph. Existing methods for counterfactual explanation of GNNs are limited to instance-specific local reasoning. This approach has two major limitations: it cannot offer global recourse policies, and it overloads human cognitive capacity with too much information. In this work, we study the global explainability of GNNs through global counterfactual reasoning. Specifically, we want to find a small set of representative counterfactual graphs that explains all input graphs. Towards this goal, we propose GCFExplainer, a novel algorithm powered by vertex-reinforced random walks on an edit map of graphs, combined with a greedy summarization step. Extensive experiments on real graph datasets show that the global explanation from GCFExplainer provides important high-level insights into model behavior, achieving a 46.9% gain in recourse coverage and a 9.5% reduction in recourse cost compared to state-of-the-art local counterfactual explainers.
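To illustrate the core mechanism the abstract names, the toy sketch below implements a vertex-reinforced random walk over a tiny, hypothetical "edit map", in which each vertex is a graph variant and edges connect variants that differ by one edit. In a vertex-reinforced walk, the probability of stepping to a neighbor grows with how often that neighbor has already been visited. This is an illustrative simplification, not the paper's actual GCFExplainer implementation: the names `vertex_reinforced_walk` and `edit_map` are invented here, and the real algorithm additionally biases the walk toward counterfactual graphs and summarizes the visited candidates greedily.

```python
import random
from collections import defaultdict

def vertex_reinforced_walk(neighbors, start, steps, seed=0):
    """Toy vertex-reinforced random walk: transition probability to a
    neighbor is proportional to 1 + (times that neighbor was visited)."""
    rng = random.Random(seed)
    visits = defaultdict(int)          # visit counts, default 0
    node, path = start, [start]
    visits[start] += 1
    for _ in range(steps):
        nbrs = neighbors[node]
        # +1 smoothing keeps never-visited neighbors reachable
        weights = [1 + visits[v] for v in nbrs]
        node = rng.choices(nbrs, weights=weights, k=1)[0]
        visits[node] += 1
        path.append(node)
    return path

# Hypothetical edit map: vertices are graph variants, edges are single edits.
edit_map = {
    "G0": ["G1", "G2"],
    "G1": ["G0", "G3"],
    "G2": ["G0", "G3"],
    "G3": ["G1", "G2"],
}
walk = vertex_reinforced_walk(edit_map, "G0", steps=20)
print(len(walk))  # 21 vertices: the start plus 20 steps
```

Because the reinforcement compounds, frequently revisited regions of the edit map accumulate probability mass, which is what lets such walks concentrate on a small set of representative candidate graphs.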