Paper Title

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

Paper Authors

Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

Paper Abstract

For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions. Recently, an increasing number of works focus on explaining the predictions of neural models in terms of the relevance of the input features. In this work, we show that feature-based explanations pose problems even for explaining trivial models. We show that, in certain cases, there exist at least two ground-truth feature-based explanations, and that, sometimes, neither of them is enough to provide a complete view of the decision-making process of the model. Moreover, we show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations, despite the apparently implicit assumption that explainers should look for one specific feature-based explanation. These findings bring an additional dimension to consider in both developing and choosing explainers.
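The divergence described in the abstract can be seen on a trivial model. The following is a minimal sketch, not taken from the paper: it uses the toy model f(x1, x2) = x1 OR x2 at input (1, 1), with a baseline value of 0 standing in for "removed" features (both the model and the baseline are illustrative assumptions). It computes exact Shapley values by enumerating coalitions, and finds minimal sufficient subsets by brute force.

```python
# Minimal sketch (illustrative, not the paper's code): on f(x1, x2) = x1 OR x2
# at input (1, 1), Shapley values split credit equally across the two features,
# while there are two distinct minimal sufficient subsets, {x1} and {x2}.
# Assumptions: binary features, baseline value 0 for features outside a subset.
from itertools import chain, combinations
from math import factorial

def f(x):
    # Trivial model: logical OR of two binary features.
    return int(x[0] or x[1])

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def value(subset, x, baseline=0):
    # Evaluate f with features outside `subset` replaced by the baseline.
    masked = [x[i] if i in subset else baseline for i in range(len(x))]
    return f(masked)

def shapley(x):
    # Exact Shapley values: weighted average of marginal contributions
    # over all coalitions of the other features.
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for s in map(set, powerset(others)):
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            phi[i] += weight * (value(s | {i}, x) - value(s, x))
    return phi

def minimal_sufficient_subsets(x):
    # A subset S is sufficient if fixing the features in S to their input
    # values guarantees the prediction for every completion of the rest.
    n, pred = len(x), f(x)
    def sufficient(s):
        free = [j for j in range(n) if j not in s]
        for ones in powerset(free):  # enumerate all 0/1 completions
            probe = [x[i] if i in s else 0 for i in range(n)]
            for j in ones:
                probe[j] = 1
            if f(probe) != pred:
                return False
        return True
    suff = [set(s) for s in powerset(range(n)) if sufficient(set(s))]
    return [s for s in suff if not any(t < s for t in suff)]

x = [1, 1]
print("Shapley values:", shapley(x))                                  # [0.5, 0.5]
print("Minimal sufficient subsets:", minimal_sufficient_subsets(x))   # [{0}, {1}]
```

On this model the two explainer families disagree in exactly the way the abstract describes: the Shapley explanation assigns 0.5 to each feature, so neither feature alone accounts for the prediction, whereas the minimal-sufficient-subset view yields two distinct single-feature explanations, {x1} and {x2}, each of which guarantees the output on its own.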
