Title
Real-Time Counterfactual Explanations for Robotic Systems with Multiple Continuous Outputs
Authors
Abstract
Although many machine learning methods, especially from the field of deep learning, have been instrumental in addressing challenges within robotic applications, we cannot take full advantage of such methods until they can provide performance and safety guarantees. The lack of trust that impedes the use of these methods stems mainly from a lack of human understanding of what exactly machine learning models have learned, and how robust their behaviour is. This is the problem the field of explainable artificial intelligence aims to solve. Based on insights from the social sciences, we know that humans prefer contrastive explanations, i.e. explanations answering the hypothetical question "what if?". In this paper, we show that linear model trees are capable of producing answers to such questions, so-called counterfactual explanations, for robotic systems, including in the case of multiple continuous inputs and outputs. We demonstrate the use of this method to produce counterfactual explanations for two robotic applications. Additionally, we explore the issue of infeasibility, which is of particular interest in systems governed by the laws of physics.
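To make the core idea concrete, here is a minimal sketch (an assumption-laden illustration, not the paper's implementation): a linear model tree is approximated by a shallow `DecisionTreeRegressor` that partitions the input space, with an ordinary `LinearRegression` fitted in each leaf. Because each leaf's model is linear, a counterfactual "what if?" query — the smallest input change that moves the prediction to a target output — has a closed-form answer within that leaf. All names (`counterfactual`, the toy piecewise-linear data) are invented for this sketch.

```python
# Sketch of counterfactuals from a linear-model-tree-style surrogate.
# Assumption: tree partition via DecisionTreeRegressor + per-leaf LinearRegression
# stands in for a true linear model tree. Toy data, not from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))                 # two continuous inputs
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -X[:, 1])    # piecewise-linear output

# Partition the input space, then fit one linear model per leaf.
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
leaves = tree.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaves == leaf], y[leaves == leaf])
               for leaf in np.unique(leaves)}

def counterfactual(x, y_target):
    """Minimal-norm input change dx, within the query point's leaf model
    w @ x + b, such that w @ (x + dx) + b == y_target.
    Closed form: dx = w * (y_target - (w @ x + b)) / ||w||^2."""
    m = leaf_models[tree.apply(x.reshape(1, -1))[0]]
    w, b = m.coef_, m.intercept_
    gap = y_target - (w @ x + b)
    return x + w * gap / (w @ w)

x = np.array([0.5, 0.2])
x_cf = counterfactual(x, y_target=1.0)   # "what if the output were 1.0?"
```

Note that the closed form only guarantees the target output under the *current leaf's* linear model; the counterfactual point may land outside that leaf's region, or violate physical constraints of the system — a simple view of the infeasibility issue the abstract raises.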