Paper Title

Concept Embedding Analysis: A Review

Paper Author

Schwalbe, Gesina

Paper Abstract

Deep neural networks (DNNs) have found their way into many applications with potential impact on the safety, security, and fairness of human-machine systems. Such applications require a basic understanding of, and sufficient trust in, the models by their users. This motivated the research field of explainable artificial intelligence (XAI), i.e. finding methods for opening the "black boxes" that DNNs represent. For the computer vision domain in particular, practical assessment of DNNs requires a globally valid association of human-interpretable concepts with internals of the model. The research field of concept (embedding) analysis (CA) tackles this problem: CA aims to find global, assessable associations of human-interpretable semantic concepts (e.g., eye, bearded) with internal representations of a DNN. This work establishes a general definition of CA and a taxonomy for CA methods, uniting several ideas from the literature. This allows CA approaches to be easily positioned and compared. Guided by the defined notions, the current state-of-the-art research on CA methods and interesting applications is reviewed. More than thirty relevant methods are discussed, compared, and categorized. Finally, for practitioners, a survey of fifteen datasets that have been used for supervised concept analysis is provided. Open challenges and research directions are pointed out at the end.
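As a rough illustration of the kind of concept-to-representation association the abstract describes, the sketch below trains a linear probe on intermediate activations to test whether a concept (e.g., "eye") is linearly encoded in a layer. This is a generic, hypothetical example (toy PyTorch model, randomly generated placeholder data, arbitrary layer choice), not the paper's own method or any specific approach from the review.

```python
# Minimal sketch of a supervised concept probe on intermediate activations.
# The model, layer index, and "dataset" are placeholders for illustration.
import torch
import torch.nn as nn

# Toy CNN standing in for a real, pretrained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Capture activations of an intermediate layer via a forward hook.
activations = []
layer = model[3]  # pooled, flattened conv features
hook = layer.register_forward_hook(lambda m, i, o: activations.append(o.detach()))

# Placeholder concept dataset: images labeled "concept present" vs. "absent".
imgs = torch.randn(64, 3, 32, 32)
concept_labels = torch.randint(0, 2, (64,)).float()

with torch.no_grad():
    model(imgs)
hook.remove()
feats = torch.cat(activations)  # (64, 8) intermediate embeddings

# Linear probe; its weight vector plays the role of a concept direction.
probe = nn.Linear(feats.shape[1], 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(feats).squeeze(1), concept_labels)
    loss.backward()
    opt.step()

# High held-out probe accuracy would suggest the concept is (linearly)
# represented at this layer; with random data, ~50% is expected.
with torch.no_grad():
    acc = ((probe(feats).squeeze(1) > 0) == concept_labels.bool()).float().mean()
print(f"probe accuracy: {acc:.2f}")
```

In practice such probes are evaluated on held-out concept examples and compared across layers; the sketch omits that for brevity.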
