Paper Title
Global Explainability of GNNs via Logic Combination of Learned Concepts
Paper Authors
Paper Abstract
While instance-level explanation of GNNs is a well-studied problem with plenty of approaches having been developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as input and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.
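To make the architecture described in the abstract concrete, below is a minimal, illustrative sketch in PyTorch of the two-stage idea: embed local explanations, soft-assign them to learned concept prototypes (the "clusters of local explanations"), and feed the resulting concept activations into a differentiable head from which a Boolean formula over concepts can later be read off. All names here (ConceptProjector, GlobalExplainerSketch, num_concepts) are hypothetical and not taken from the paper; the actual GLGExplainer uses its own components (e.g. an entropy-based logic layer for formula extraction), so this is a sketch of the general idea under stated assumptions, not the published implementation.

```python
# Hypothetical sketch of a GLGExplainer-style pipeline; component names and
# the linear "logic head" are illustrative stand-ins, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptProjector(nn.Module):
    """Soft-assigns embedded local explanations to learned concept prototypes."""

    def __init__(self, embed_dim: int, num_concepts: int):
        super().__init__()
        # One learnable prototype vector per graphical concept (cluster centre).
        self.prototypes = nn.Parameter(torch.randn(num_concepts, embed_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, embed_dim) embeddings of local explanation subgraphs.
        # Distance to each prototype, turned into a soft cluster assignment.
        dists = torch.cdist(h, self.prototypes)  # (batch, num_concepts)
        return F.softmax(-dists, dim=-1)         # concept activations in [0, 1]


class GlobalExplainerSketch(nn.Module):
    """Maps local-explanation embeddings to class scores through concept
    activations, so a Boolean formula over concepts can be distilled later."""

    def __init__(self, embed_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        self.concepts = ConceptProjector(embed_dim, num_concepts)
        # Stand-in for the differentiable logic layer: a linear head whose
        # weights over concept activations can be thresholded into a formula.
        self.logic_head = nn.Linear(num_concepts, num_classes)

    def forward(self, h: torch.Tensor):
        c = self.concepts(h)            # (batch, num_concepts)
        return self.logic_head(c), c    # class logits + concept activations


# Usage: embeddings of local explanations (e.g. from a frozen GNN encoder).
model = GlobalExplainerSketch(embed_dim=32, num_concepts=4, num_classes=2)
h = torch.randn(8, 32)                  # 8 local explanations, 32-dim each
logits, concepts = model(h)
print(logits.shape, concepts.shape)     # torch.Size([8, 2]) torch.Size([8, 4])
```

In this toy stand-in, a human-readable rule such as "class 1 if concept C2 and not C1" would be recovered by thresholding the head's weights over concept activations; the actual system replaces the linear head with a differentiable logic layer designed so that such formulas can be extracted faithfully.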