Paper Title


Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support

Paper Authors

Christian Meske, Enrico Bunde

Paper Abstract


Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the algorithms' complexity is a reason for their increased performance, it also leads to the "black box" problem, consequently decreasing trust towards AI. In this regard, "Explainable Artificial Intelligence" (XAI) makes it possible to open that black box and to improve the degree of AI transparency. In this paper, we first discuss the theoretical impact of explainability on trust towards AI, and then showcase what the use of XAI in a health-related setting can look like. More specifically, we show how XAI can be applied to understand why Computer Vision, based on deep learning, did or did not detect a disease (malaria) on image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategies of two different deep learning models often used for Computer Vision: the Convolutional Neural Network and the Multi-Layer Perceptron. Our empirical results show that i) the AI sometimes used questionable or irrelevant data features of an image to detect malaria (even when the prediction was correct), and ii) there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and AI systems in general, especially through increased understandability and predictability.
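The abstract does not specify which model-agnostic XAI method the authors used. As an illustration of the general idea, the sketch below implements a simple occlusion-based importance map: the explanation treats the classifier purely as a black-box `predict_fn`, so it works identically for a CNN or an MLP. All names here (`occlusion_importance`, `dummy_predict`) are hypothetical; the dummy classifier stands in for a trained malaria detector and is not a real model.

```python
import numpy as np

def occlusion_importance(image, predict_fn, patch=4, baseline=0.0):
    """Model-agnostic importance map: occlude each patch of the image
    with a baseline value and record how much the predicted probability
    drops. Large drops mark regions the model relied on."""
    h, w = image.shape[:2]
    base_prob = predict_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_prob - predict_fn(occluded)
    return heat

# Hypothetical stand-in for a trained CNN or MLP: "probability of malaria"
# is just the mean intensity of the image centre (a dummy, not a real model).
def dummy_predict(img):
    return float(img[4:12, 4:12].mean())

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0  # bright "parasite-like" region in the centre
heat = occlusion_importance(img, dummy_predict)
print(heat.shape)  # (4, 4): one importance score per 4x4 patch
```

Because the map is built only from calls to `predict_fn`, running it on two different models over the same image directly exposes discrepancies in their detection strategies, which is the comparison the paper describes.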
