Paper Title


Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective

Authors

Ribeiro, José, Cardoso, Lucas, Silva, Raíssa, Cirilo, Vitor, Carneiro, Níkolas, Alves, Ronnie

Abstract


In recent years, XAI researchers have been formalizing proposals and developing new methods to explain black-box models. There is no general consensus in the community on which method to use, and the choice is almost directly linked to the popularity of a specific method. Methods such as CIU, Dalex, Eli5, Lofo, Shap, and Skater emerged with the proposal to explain black-box models through global rankings of feature relevance; based on different methodologies, they generate global explanations that indicate how the model's inputs explain its predictions. In this context, 41 datasets, 4 tree-ensemble algorithms (Light Gradient Boosting, CatBoost, Random Forest, and Gradient Boosting), and 6 XAI methods were used to support the launch of a new XAI method, called eXirt, based on Item Response Theory (IRT) and aimed at tree-ensemble black-box models that use tabular data for binary classification problems. In the first set of analyses, the 164 global feature-relevance rankings produced by eXirt were compared with 984 rankings from the other XAI methods present in the literature, seeking to highlight their similarities and differences. In a second analysis, explanations exclusive to eXirt, based on explanation-by-example, were presented to help in understanding model trust. Thus, it was verified that eXirt is able to generate global explanations of tree-ensemble models and also local explanations of model instances through IRT, showing how this consolidated theory can be used in machine learning to obtain explainable and reliable models.
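The "global ranking of feature relevance" that the abstract compares across methods can be illustrated with a generic permutation-importance sketch. This is not eXirt's IRT-based procedure, and the model, data, and function names below are all hypothetical stand-ins; it only shows the kind of output (an ordered list of features by relevance) that such rankings share:

```python
import random

random.seed(0)

# Hypothetical synthetic tabular data for a binary classification task:
# feature 0 is informative, feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model_predict(row):
    # Stand-in "black box" that, by construction, uses only feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model_predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_features):
    """Relevance of each feature = drop in accuracy when its column is shuffled."""
    base = accuracy(X, y)
    importances = []
    for j in range(n_features):
        col = [r[j] for r in X]
        random.shuffle(col)
        X_perm = [r[:j] + [col[i]] + r[j + 1:] for i, r in enumerate(X)]
        importances.append(base - accuracy(X_perm, y))
    return importances

imp = permutation_importance(X, y, n_features=2)
# Global relevance ranking: feature indices ordered by importance, best first.
ranking = sorted(range(2), key=lambda j: -imp[j])
print(ranking)
```

Rankings like this one are what the paper's first analysis compares pairwise (164 eXirt rankings against 984 from the other six methods), looking for agreement between methods rather than a single "true" ordering.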
