Paper Title


Shapley Flow: A Graph-based Approach to Interpreting Model Predictions

Authors

Jiaxuan Wang, Jenna Wiens, Scott Lundberg

Abstract


Many existing approaches for estimating feature importance are problematic because they ignore or hide dependencies among features. A causal graph, which encodes the relationships among input variables, can aid in assigning feature importance. However, current approaches that assign credit to nodes in the causal graph fail to explain the entire graph. In light of these limitations, we propose Shapley Flow, a novel approach to interpreting machine learning models. It considers the entire causal graph, and assigns credit to \textit{edges} instead of treating nodes as the fundamental unit of credit assignment. Shapley Flow is the unique solution to a generalization of the Shapley value axioms to directed acyclic graphs. We demonstrate the benefit of using Shapley Flow to reason about the impact of a model's input on its output. In addition to maintaining insights from existing approaches, Shapley Flow extends the flat, set-based, view prevalent in game theory based explanation methods to a deeper, \textit{graph-based}, view. This graph-based view enables users to understand the flow of importance through a system, and reason about potential interventions.
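The abstract builds on the classic Shapley value, which assigns credit to individual features by averaging their marginal contributions over all orderings. As a point of reference, here is a minimal sketch of that standard computation on a toy cooperative game; it illustrates only the flat, set-based view the paper extends, not the edge-based Shapley Flow algorithm itself, and the feature names and value function are made up for illustration.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            after = value(frozenset(coalition))
            totals[p] += after - before
    return {p: totals[p] / len(orderings) for p in players}

# Hypothetical game: additive feature contributions plus an
# interaction bonus when x1 and x2 appear together.
contrib = {"x1": 2.0, "x2": 1.0, "x3": 0.5}

def v(S):
    base = sum(contrib[p] for p in S)
    bonus = 1.0 if {"x1", "x2"} <= S else 0.0
    return base + bonus

phi = shapley_values(list(contrib), v)
print(phi)  # the x1-x2 interaction bonus is split evenly between them
```

Note that the values sum to the full coalition's payoff (the efficiency axiom); Shapley Flow generalizes these axioms so that credit instead flows along the edges of a causal DAG over the inputs.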
