Paper Title


Truthful Meta-Explanations for Local Interpretability of Machine Learning Models

Paper Authors

Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas

Paper Abstract


Automated Machine Learning-based systems' integration into a wide range of tasks has expanded as a result of their performance and speed. Although there are numerous advantages to employing ML-based systems, if they are not interpretable, they should not be used in critical, high-risk applications where human lives are at risk. To address this issue, researchers and businesses have been focusing on finding ways to improve the interpretability of complex ML systems, and several such methods have been developed. Indeed, there are so many developed techniques that it is difficult for practitioners to choose the best among them for their applications, even when using evaluation metrics. As a result, the demand for a selection tool, a meta-explanation technique based on a high-quality evaluation metric, is apparent. In this paper, we present a local meta-explanation technique which builds on top of the truthfulness metric, which is a faithfulness-based metric. We demonstrate the effectiveness of both the technique and the metric by concretely defining all the concepts and through experimentation.
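To make the abstract's idea more concrete, below is a minimal sketch of how a faithfulness-style "truthfulness" check and a simple meta-explanation selector could look. The function names (`truthfulness_score`, `meta_explanation`), the symmetric epsilon perturbation scheme, and the best-score selection rule are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def truthfulness_score(model_predict, instance, importances, epsilon=0.1):
    """Illustrative faithfulness-style check (assumed scheme, not the paper's exact metric).

    For each feature, perturb the instance up and down by epsilon and test whether the
    model's prediction moves in a direction consistent with the sign of that feature's
    importance. Returns the fraction of features judged consistent ("truthful").
    """
    instance = np.asarray(instance, dtype=float)
    base = model_predict(instance.reshape(1, -1))[0]
    consistent = 0
    for i, w in enumerate(importances):
        up, down = instance.copy(), instance.copy()
        up[i] += epsilon
        down[i] -= epsilon
        delta_up = model_predict(up.reshape(1, -1))[0] - base
        delta_down = model_predict(down.reshape(1, -1))[0] - base
        # A positive importance should push the prediction up when the feature increases
        # and down when it decreases; a negative importance should do the reverse.
        if w >= 0:
            consistent += int(delta_up >= 0 and delta_down <= 0)
        else:
            consistent += int(delta_up <= 0 and delta_down >= 0)
    return consistent / len(importances)

def meta_explanation(model_predict, instance, candidate_explanations, epsilon=0.1):
    """Toy meta-explanation: score each candidate explanation (one importance vector per
    explainer, e.g. LIME or SHAP outputs) and return the highest-scoring one."""
    scores = [truthfulness_score(model_predict, instance, imp, epsilon)
              for imp in candidate_explanations]
    best = int(np.argmax(scores))
    return candidate_explanations[best], scores
```

As a usage example, `candidate_explanations` would hold the per-feature importance vectors produced by several local explainers for the same instance, and `model_predict` would be the black-box model's prediction function; the selector then returns the explanation whose importances are most consistent with the model's local behavior under this assumed perturbation test.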
