Paper Title
Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box
Paper Authors
Paper Abstract
This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three different types of models: a decision tree (fully transparent, interpretable, white-box model), a random forest (semi-interpretable, grey-box model), and a neural network (fully opaque, black-box model). We tested the counterfactual generation process using four algorithms from the literature (DiCE, WatcherCF, Prototype, and GrowingSpheresCF) on 25 different datasets. Our findings indicate that: (1) different machine learning models have little impact on the generation of counterfactual explanations; (2) counterfactual algorithms based solely on proximity loss functions are not actionable and do not provide meaningful explanations; (3) meaningful evaluation results cannot be obtained without guaranteeing plausibility in the counterfactual generation, and algorithms that do not consider plausibility in their internal mechanisms will lead to biased and unreliable conclusions if evaluated with the current state-of-the-art metrics; (4) a counterfactual inspection analysis is strongly recommended to ensure a robust examination of counterfactual explanations and the potential identification of biases.
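As a rough illustration of the kind of benchmark cell the abstract describes, the sketch below trains one of the three model types (a random forest, the grey-box case) and requests counterfactuals from it with the dice-ml implementation of DiCE. The dataset file, feature names, and DiCE settings are illustrative assumptions, not the authors' exact experimental protocol.

```python
# Minimal sketch of one benchmark cell: a random forest (grey-box model)
# explained with DiCE. The CSV file, column names, and DiCE settings are
# illustrative assumptions, not the paper's exact setup.
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical all-numeric tabular dataset with a binary column named "target".
df = pd.read_csv("tabular_sample.csv")
train, test = train_test_split(df, test_size=0.2, random_state=0)

# Train the grey-box model on the raw features.
X_train = train.drop(columns=["target"])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, train["target"])

# Wrap the data and the trained model for DiCE.
feature_cols = [c for c in train.columns if c != "target"]
data = dice_ml.Data(
    dataframe=train,
    continuous_features=feature_cols,  # assumed: all features are continuous
    outcome_name="target",
)
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Generate 4 counterfactuals for one test instance, flipping its prediction.
query = test.drop(columns=["target"]).iloc[[0]]
cf = explainer.generate_counterfactuals(query, total_CFs=4, desired_class="opposite")
cf.visualize_as_dataframe(show_only_changes=True)
```

Repeating this loop over the white-box (decision tree) and black-box (neural network) models, and swapping DiCE for the other three counterfactual algorithms, yields the model-by-algorithm grid that the benchmark evaluates.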