Paper Title
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
Paper Authors
Paper Abstract
Decision explanations for machine learning black-box models are often generated by applying Explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs, and evaluation and verification are usually carried out through human visual inspection of explanations for individual images or texts. In this preregistration, we propose an empirical study and benchmark framework to apply attribution methods for neural networks, originally developed for image and text data, to time series. We present a methodology that uses perturbation methods to automatically evaluate and rank attribution techniques on time series and thereby identify reliable approaches.
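The perturbation-based ranking idea in the abstract can be illustrated with a minimal sketch: replace the time points an attribution method marks as most relevant and measure how much the model's quality score drops; a method whose top points cause the largest drop is ranked as more reliable. The names below (perturb_top_k, rank_attribution_methods, model_score, attributions_by_method) are hypothetical illustrations of the general technique, not the paper's actual benchmark framework.

```python
import numpy as np

def perturb_top_k(x, attribution, k, replacement=0.0):
    """Replace the k time points with the highest attribution scores.

    Assumes x and attribution are 1-D arrays of equal length
    (one univariate time series and its per-point relevance).
    """
    x_perturbed = x.copy()
    top_idx = np.argsort(attribution)[-k:]   # indices of the most relevant points
    x_perturbed[top_idx] = replacement       # e.g., zero, noise, or the series mean
    return x_perturbed

def rank_attribution_methods(model_score, X, attributions_by_method, k=10):
    """Rank attribution methods by the quality drop their perturbations cause.

    model_score: callable mapping an array of series to a scalar quality
    score (e.g., accuracy on a labeled set). A larger drop after perturbing
    a method's most relevant points suggests the attribution pointed at
    genuinely important regions of the series.
    """
    base_score = model_score(X)
    drops = {}
    for name, attributions in attributions_by_method.items():
        X_pert = np.stack([perturb_top_k(x, a, k)
                           for x, a in zip(X, attributions)])
        drops[name] = base_score - model_score(X_pert)
    # Sort so the method causing the largest quality drop comes first.
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)
```

Because the ranking relies only on a scalar model-quality score, this kind of evaluation needs no human inspection of individual explanations, which is what makes an automated benchmark across many attribution techniques feasible.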