Paper Title
Foundations of Explainable Knowledge-Enabled Systems
Paper Authors
Paper Abstract
Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.