Paper Title
Directions for Explainable Knowledge-Enabled Systems
Paper Authors
Paper Abstract
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.