Title
Shapley values for cluster importance: How clusters of the training data affect a prediction
Authors
Abstract
This paper proposes a novel approach to explain the predictions made by data-driven methods. Since such predictions rely heavily on the data used for training, explanations that convey information about how the training data affects the predictions are useful. The paper proposes a novel approach to quantify how different clusters of the training data affect a prediction. The quantification is based on Shapley values, a concept which originates from coalitional game theory, developed to fairly distribute the payout among a set of cooperating players. A player's Shapley value is a measure of that player's contribution. Shapley values are often used to quantify feature importance, i.e., how features affect a prediction. This paper extends this to cluster importance, letting clusters of the training data act as players in a game where the predictions are the payouts. The novel methodology proposed in this paper lets us explore and investigate how different clusters of the training data affect the predictions made by any black-box model, allowing new aspects of the reasoning and inner workings of a prediction model to be conveyed to the users. The methodology is fundamentally different from existing explanation methods, providing insight which would not be available otherwise, and should complement existing explanation methods, including explanations based on feature importance.
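To make the game-theoretic setup concrete, the following is a minimal sketch (not the paper's actual estimation procedure) of exact Shapley values where clusters of training data are the players and the model's prediction at a test point is the payout. The helper names (`shapley_cluster_importance`, `train_and_predict`) are hypothetical, and the brute-force enumeration over all coalitions shown here scales exponentially in the number of clusters; it is only meant to illustrate the definition.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_cluster_importance(clusters, x, train_and_predict, v_empty=0.0):
    """Exact Shapley values for cluster importance (illustrative sketch).

    clusters          : list of (X_i, y_i) arrays -- the 'players'
    x                 : the test point being explained
    train_and_predict : fits a model on the given data, returns prediction at x
    v_empty           : payout of the empty coalition (e.g. a prior / baseline)
    """
    n = len(clusters)

    def value(S):
        # Payout of coalition S: prediction of a model trained
        # only on the training data belonging to clusters in S.
        if not S:
            return v_empty
        X = np.vstack([clusters[i][0] for i in S])
        y = np.concatenate([clusters[i][1] for i in S])
        return train_and_predict(X, y, x)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            # Shapley weight for coalitions of size k not containing player i.
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                phi[i] += w * (value(list(S) + [i]) - value(list(S)))
    return phi
```

As a quick usage example, `train_and_predict` can be any black-box fit-and-predict routine, e.g. an ordinary least-squares fit via `np.linalg.lstsq`. By the efficiency property of Shapley values, the cluster importances sum to the full-data prediction minus the empty-coalition baseline.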