Paper Title
Selective Forgetting of Deep Networks at a Finer Level than Samples
Paper Authors
Paper Abstract
Selective forgetting, i.e., removing information from deep neural networks (DNNs), is essential for continual learning and is challenging because DNNs are difficult to control. Such forgetting is also crucial in a practical sense, since deployed DNNs may have been trained on data that contain outliers, are poisoned by attackers, or include leaked/sensitive information. In this paper, we formulate selective forgetting for classification tasks at a finer level than the sample level. We specify this finer level in terms of four datasets distinguished by two conditions: whether they contain information to be forgotten, and whether they are available to the forgetting procedure. Additionally, we demonstrate the need for such a formulation by presenting concrete, practical situations. Moreover, we cast the forgetting procedure as an optimization problem over three criteria: a forgetting term, a correction term, and a remembering term. Experimental results show that the proposed methods can make a model stop using specific information for classification. Notably, in certain cases, our methods improve the model's accuracy on data that contain information to be forgotten but are unavailable during the forgetting procedure; in actual situations, such data are discovered unexpectedly and misclassified.
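
To make the three-term objective more concrete, below is a minimal PyTorch-style sketch of one way such a combined loss could look. This is an illustrative assumption, not the paper's actual formulation: the batch names (forget_batch, correct_batch, remember_batch), the weights lam_f, lam_c, lam_r, and the particular choice of each term (e.g., pushing predictions on to-be-forgotten data toward a uniform distribution) are all hypothetical.

```python
import torch
import torch.nn.functional as F

def forgetting_objective(model, forget_batch, correct_batch, remember_batch,
                         lam_f=1.0, lam_c=1.0, lam_r=1.0):
    # Hypothetical sketch of a three-term forgetting objective; the term
    # definitions and weights below are assumptions, not the paper's method.
    x_f, _ = forget_batch      # data whose information should be forgotten
    x_c, y_c = correct_batch   # data with corrected labels
    x_r, y_r = remember_batch  # data whose behavior should be preserved

    # Forgetting term: push predictions on to-be-forgotten data toward a
    # uniform distribution (one possible choice, assumed here).
    logits_f = model(x_f)
    uniform = torch.full_like(logits_f, 1.0 / logits_f.size(1))
    loss_forget = F.kl_div(F.log_softmax(logits_f, dim=1), uniform,
                           reduction="batchmean")

    # Correction term: fit the corrected labels on data to be relabeled.
    loss_correct = F.cross_entropy(model(x_c), y_c)

    # Remembering term: retain ordinary classification performance.
    loss_remember = F.cross_entropy(model(x_r), y_r)

    return lam_f * loss_forget + lam_c * loss_correct + lam_r * loss_remember
```

In this sketch the remembering term simply preserves standard classification on retained data; a distillation term against the original model's outputs would be another natural choice for that role.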