Title
Computing Abductive Explanations for Boosted Trees
Authors
Abstract
Boosted trees are a dominant class of ML models, exhibiting high accuracy. However, boosted trees are hardly intelligible, which is a problem whenever they are used in safety-critical applications. Indeed, in such a context, rigorous explanations of the predictions made are expected. Recent work has shown how subset-minimal abductive explanations can be derived for boosted trees using automated reasoning techniques. However, the generation of such well-founded explanations is intractable in the general case. To improve the scalability of their generation, we introduce the notion of tree-specific explanation for a boosted tree. We show that tree-specific explanations are abductive explanations that can be computed in polynomial time. We also explain how to derive a subset-minimal abductive explanation from a tree-specific explanation. Experiments on various datasets show the computational benefits of leveraging tree-specific explanations for deriving subset-minimal abductive explanations.
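The abstract does not spell out the algorithms themselves. As a purely illustrative sketch of the general deletion-based scheme used to reduce an abductive explanation to a subset-minimal one, the Python snippet below works on a hypothetical toy ensemble of decision stumps over binary features; the model, the brute-force entailment check, and all names are assumptions for illustration only and do not reproduce the paper's tree-specific procedure or its polynomial-time computation.

```python
from itertools import product

# Hypothetical tiny "boosted tree": a sum of three stumps over binary
# features; the predicted class is 1 iff the total score is positive.
def predict(x):
    score = 0.0
    score += 0.6 if x[0] == 1 else -0.3   # stump on feature 0
    score += 0.5 if x[1] == 1 else -0.4   # stump on feature 1
    score += 0.1 if x[2] == 1 else -0.2   # stump on feature 2
    return 1 if score > 0 else 0

def is_abductive(instance, fixed, n_features):
    """Check (by brute force) that fixing the features in `fixed` to their
    values in `instance` forces the same prediction for every completion
    of the remaining features. This oracle is the intractable step in
    general; here the feature space is tiny so enumeration is fine."""
    target = predict(instance)
    free = [i for i in range(n_features) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if predict(x) != target:
            return False
    return True

def subset_minimal_explanation(instance, n_features):
    """Greedy deletion: starting from all features, drop a feature whenever
    the remaining ones still entail the prediction; the result is a
    subset-minimal abductive explanation."""
    fixed = set(range(n_features))
    for i in range(n_features):
        if is_abductive(instance, fixed - {i}, n_features):
            fixed.discard(i)
    return sorted(fixed)

instance = [1, 1, 0]
print(subset_minimal_explanation(instance, 3))  # -> [0, 1]
```

On this toy instance, features 0 and 1 alone entail class 1 (the prediction no longer depends on feature 2), so the greedy pass returns {0, 1}. The paper's contribution, as summarized above, is to replace the expensive entailment oracle by tree-specific explanations that are computable in polynomial time and can then be refined to subset-minimal abductive explanations.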