Paper Title
Kernelized Concept Erasure
Paper Authors
Paper Abstract
The representation space of neural models for textual data emerges in an unsupervised manner during training. Understanding how those representations encode human-interpretable concepts is a fundamental problem. One prominent approach for identifying concepts in neural representations is to search for a linear subspace whose erasure prevents the prediction of the concept from the representations. However, while many linear erasure algorithms are tractable and interpretable, neural networks do not necessarily represent concepts in a linear manner. To identify non-linearly encoded concepts, we propose a kernelization of a linear minimax game for concept erasure. We demonstrate that it is possible to prevent specific non-linear adversaries from predicting the concept. However, the protection does not transfer to different non-linear adversaries. Therefore, exhaustively erasing a non-linearly encoded concept remains an open problem.
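The idea of kernelizing a linear erasure method can be illustrated with a minimal sketch. This is not the paper's exact algorithm: it approximates an RBF kernel with random Fourier features and then runs an INLP-style nullspace-projection loop in that feature space, both of which are assumptions made here for illustration. A non-linearly encoded (XOR-style) concept becomes linearly decodable in the lifted space, where a linear erasure loop can then suppress it.

```python
# Hedged sketch of kernelized concept erasure (not the paper's algorithm):
# lift representations with random Fourier features approximating an RBF
# kernel, then apply a linear erasure loop (iterative nullspace projection)
# in the lifted space.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a binary "concept" z encoded non-linearly (XOR-style) in x,
# so no linear probe on x itself can recover it.
n = 2000
x = rng.normal(size=(n, 2))
z = ((x[:, 0] > 0) ^ (x[:, 1] > 0)).astype(int)

# Random Fourier features approximating an RBF kernel.
d, gamma = 200, 1.0
W = rng.normal(scale=np.sqrt(2 * gamma), size=(2, d))
b = rng.uniform(0, 2 * np.pi, size=d)
phi = np.sqrt(2.0 / d) * np.cos(x @ W + b)

def linear_probe_acc(feats, labels):
    """Accuracy of a closed-form least-squares linear probe."""
    w, *_ = np.linalg.lstsq(feats, 2 * labels - 1, rcond=None)
    return float(((feats @ w > 0).astype(int) == labels).mean())

# In the lifted space the XOR concept is linearly decodable.
acc_before = linear_probe_acc(phi, z)

# Linear erasure loop: repeatedly fit a probe and project the features
# onto its nullspace, removing one predictive direction per iteration.
feats = phi.copy()
for _ in range(30):
    w, *_ = np.linalg.lstsq(feats, 2 * z - 1, rcond=None)
    w = w / np.linalg.norm(w)
    feats = feats - np.outer(feats @ w, w)

acc_after = linear_probe_acc(feats, z)
print(f"linear probe accuracy in feature space: "
      f"before={acc_before:.2f}, after={acc_after:.2f}")
```

As the abstract notes, guarding against one such adversary (here, a probe over one fixed feature map) need not protect against a different non-linear adversary, e.g. a probe over a kernel with a different bandwidth.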