Paper Title
Explainable Deep CNNs for MRI-Based Diagnosis of Alzheimer's Disease
Paper Authors
Paper Abstract
Deep Convolutional Neural Networks (CNNs) are becoming prominent models for semi-automated diagnosis of Alzheimer's Disease (AD) using brain Magnetic Resonance Imaging (MRI). Although highly accurate, deep CNN models lack transparency and interpretability, precluding adequate clinical reasoning and failing to comply with most current regulatory demands. One popular choice for explaining deep image models is occluding regions of the image to isolate their influence on the prediction. However, existing methods for occluding patches of brain scans generate images outside the distribution on which the model was trained, leading to unreliable explanations. In this paper, we propose an alternative explanation method that is specifically designed for the brain scan task. Our method, which we refer to as Swap Test, produces heatmaps that depict the areas of the brain that are most indicative of AD, providing interpretability for the model's decisions in a format understandable to clinicians. Experimental results using an axiomatic evaluation show that the proposed method is more suitable for explaining MRI-based diagnosis of AD, whereas the opposite trend was observed for a typical occlusion test. Therefore, we believe our method may address the inherent black-box nature of deep neural networks that are capable of diagnosing AD.
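For context, the constant-fill occlusion test that the abstract criticizes works roughly as sketched below. This is a minimal, illustrative PyTorch sketch of the generic occlusion baseline, not the authors' Swap Test; the names `model` (a hypothetical trained 3D classifier taking a (1, 1, D, H, W) volume and returning class logits), `ad_class`, `patch`, `stride`, and `fill` are assumptions introduced only for illustration.

```python
# Generic occlusion-sensitivity sketch for a 3D MRI classifier (illustrative only).
# Assumes a trained PyTorch model mapping a (1, 1, D, H, W) tensor to class logits.
import torch

def occlusion_heatmap(model, volume, ad_class, patch=16, stride=8, fill=0.0):
    """Slide a cube of side `patch` over `volume`, replace it with `fill`,
    and record the drop in the AD logit; larger drops suggest the region
    contributed more to the prediction."""
    model.eval()
    with torch.no_grad():
        base = model(volume)[0, ad_class].item()   # score on the unmodified scan
        _, _, D, H, W = volume.shape
        heat = torch.zeros(D, H, W)
        for z in range(0, D - patch + 1, stride):
            for y in range(0, H - patch + 1, stride):
                for x in range(0, W - patch + 1, stride):
                    occluded = volume.clone()
                    occluded[..., z:z+patch, y:y+patch, x:x+patch] = fill
                    score = model(occluded)[0, ad_class].item()
                    heat[z:z+patch, y:y+patch, x:x+patch] += base - score
    return heat
```

The constant `fill` value is exactly what pushes the occluded scans outside the distribution the model was trained on, which is the limitation the proposed Swap Test is designed to avoid by replacing regions in a way that keeps the input plausible.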