Title
Reviewing the Need for Explainable Artificial Intelligence (xAI)
Authors
Abstract
The diffusion of artificial intelligence (AI) applications in organizations and society has fueled research on explaining AI decisions. The explainable AI (xAI) field is rapidly expanding with numerous ways of extracting information from and visualizing the output of AI technologies (e.g., deep neural networks). Yet, we have a limited understanding of how xAI research addresses the need for explainable AI. We conduct a systematic review of the xAI literature on the topic and identify four thematic debates central to how xAI addresses the black-box problem. Based on this critical analysis of the xAI scholarship, we synthesize the findings into a future research agenda to further the xAI body of knowledge.