Paper Title
A Survey of the State of Explainable AI for Natural Language Processing
Paper Authors
Paper Abstract
Recent years have seen important advances in the quality of state-of-the-art models, but this has come at the expense of models becoming less interpretable. This survey presents an overview of the current state of Explainable AI (XAI), considered within the domain of Natural Language Processing (NLP). We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized. We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community. Finally, we point out the current gaps and encourage directions for future work in this important research area.