Paper Title

Reinforced Causal Explainer for Graph Neural Networks

Paper Authors

Xiang Wang, Yingxin Wu, An Zhang, Fuli Feng, Xiangnan He, Tat-Seng Chua

Abstract

Explainability is crucial for probing graph neural networks (GNNs), answering questions like "Why does the GNN model make a certain prediction?". Feature attribution is a prevalent technique that highlights an explanatory subgraph in the input graph which plausibly leads the GNN model to make its prediction. Various attribution methods exploit gradient-like or attention scores as the attributions of edges, then select the salient edges with the top attribution scores as the explanation. However, most of these works make an untenable assumption - that the selected edges are linearly independent - thus leaving the dependencies among edges, especially their coalition effect, largely unexplored. We demonstrate unambiguous drawbacks of this assumption - it makes the explanatory subgraph unfaithful and verbose. To address this challenge, we propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer). It frames the explanation task as a sequential decision process - an explanatory subgraph is successively constructed by adding a salient edge to connect the previously selected subgraph. Technically, its policy network predicts the action of edge addition, and receives a reward that quantifies the action's causal effect on the prediction. Such a reward accounts for the dependency between the newly added edge and the previously added edges, thus reflecting whether they collaborate and form a coalition to pursue better explanations. As such, RC-Explainer is able to generate faithful and concise explanations, and generalizes better to unseen graphs. When explaining different GNNs on three graph classification datasets, RC-Explainer achieves better or comparable performance to state-of-the-art (SOTA) approaches w.r.t. predictive accuracy and contrastivity, and safely passes sanity checks and visual inspections. Code is available at https://github.com/xiangwang1223/reinforced_causal_explainer.
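The sequential decision process described in the abstract can be sketched as follows. This is an illustrative simplification, not the paper's method: the function names are hypothetical, and a greedy search stands in for the trained policy network, while the per-step "reward" mirrors the idea of measuring the causal effect of adding an edge given the edges already selected.

```python
# Hedged sketch of sequential explanatory-subgraph construction.
# `predict` stands in for a GNN forward pass restricted to a subgraph;
# in RC-Explainer a learned policy network chooses the edge instead of
# this exhaustive greedy scan. All names here are illustrative.

def explain(predict, edges, budget):
    """Build an explanatory subgraph one edge at a time.

    predict: maps a set of edges to the model's probability for the
             target class (stand-in for the full GNN prediction).
    edges:   candidate edges of the input graph.
    budget:  number of edges to select for the explanation.
    """
    subgraph = set()
    for _ in range(budget):
        best_edge, best_gain = None, float("-inf")
        for e in edges:
            if e in subgraph:
                continue
            # "Reward": causal effect of adding e *given* the edges
            # already chosen -- this conditioning is what captures
            # edge dependencies rather than scoring edges in isolation.
            gain = predict(subgraph | {e}) - predict(subgraph)
            if gain > best_gain:
                best_edge, best_gain = e, gain
        subgraph.add(best_edge)
    return subgraph
```

With a toy `predict` that rewards recovering two ground-truth edges, the loop selects exactly those edges, illustrating how the per-step causal reward steers the construction.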
