Paper Title
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
Paper Authors
Paper Abstract
The prototypical part network (ProtoPNet) has drawn wide attention and spurred many follow-up studies due to its self-explanatory property for explainable artificial intelligence (XAI). However, when ProtoPNet is applied directly on vision transformer (ViT) backbones, the learned prototypes suffer from a "distraction" problem: they have a relatively high probability of being activated by the background and pay less attention to the foreground. The transformer's powerful capability for modeling long-range dependencies makes a transformer-based ProtoPNet hard to focus on prototypical parts, severely impairing its inherent interpretability. This paper proposes the prototypical part transformer (ProtoPFormer) to appropriately and effectively apply the prototype-based method with ViTs for interpretable image recognition. The proposed method introduces global and local prototypes that capture and highlight the representative holistic and partial features of targets according to the architectural characteristics of ViTs. The global prototypes provide a global view of objects that guides the local prototypes to concentrate on the foreground while eliminating the influence of the background. Afterwards, the local prototypes are explicitly supervised to concentrate on their respective prototypical visual parts, increasing the overall interpretability. Extensive experiments demonstrate that the proposed global and local prototypes mutually correct each other and jointly make the final decision, faithfully and transparently explaining the decision-making process from the holistic and local perspectives, respectively. Moreover, ProtoPFormer consistently achieves superior performance and visualization results over state-of-the-art (SOTA) prototype-based baselines. Our code has been released at https://github.com/zju-vipa/ProtoPFormer.
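As a rough illustration of the two-branch prototype head the abstract describes, the PyTorch sketch below scores a ViT's class token against global prototypes and its patch tokens against local prototypes, then combines both similarity vectors for a joint decision. The class `ProtoHeadSketch`, its shapes, and the cosine-similarity scoring are hypothetical choices for illustration, not the authors' implementation; the paper's foreground-guidance and part-concentration supervision are omitted here.

```python
# Minimal sketch, assuming a ViT backbone whose forward pass yields a class
# token (holistic view) and patch tokens (local parts). All names, shapes,
# and the cosine scoring are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoHeadSketch(nn.Module):
    """Two-branch prototype head: global prototypes score the class token,
    local prototypes score patch tokens; both feed one linear decision layer."""

    def __init__(self, dim=192, num_classes=10, global_per_class=1, local_per_class=5):
        super().__init__()
        self.global_protos = nn.Parameter(torch.randn(num_classes * global_per_class, dim))
        self.local_protos = nn.Parameter(torch.randn(num_classes * local_per_class, dim))
        n_protos = num_classes * (global_per_class + local_per_class)
        self.head = nn.Linear(n_protos, num_classes, bias=False)

    def forward(self, cls_token, patch_tokens):
        # cls_token: (B, dim); patch_tokens: (B, N, dim)
        # Global branch: class token vs. each global prototype.
        g_sim = F.cosine_similarity(
            cls_token.unsqueeze(1), self.global_protos.unsqueeze(0), dim=-1)  # (B, Pg)
        # Local branch: every patch vs. every local prototype, then max-pool
        # over patches so each local prototype reports its best-matching part.
        l_sim = F.cosine_similarity(
            patch_tokens.unsqueeze(2),
            self.local_protos.view(1, 1, *self.local_protos.shape), dim=-1)  # (B, N, Pl)
        l_sim = l_sim.max(dim=1).values  # (B, Pl)
        # Joint decision from global and local evidence.
        return self.head(torch.cat([g_sim, l_sim], dim=-1))

# Toy usage: batch of 2 images, 196 patch tokens of width 192.
head = ProtoHeadSketch()
logits = head(torch.randn(2, 192), torch.randn(2, 196, 192))  # -> (2, 10)
```

In the method itself, additional supervision would steer the local branch toward foreground patches; the max-pooling readout above is only the standard ProtoPNet-style "best matching patch" score per prototype.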