Paper Title

Mixed-Privacy Forgetting in Deep Networks

Paper Authors

Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, Stefano Soatto

Paper Abstract

We show that the influence of a subset of the training samples can be removed -- or "forgotten" -- from the weights of a network trained on large-scale image classification tasks, and we provide strong computable bounds on the amount of remaining information after forgetting. Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in a mixed-privacy setting, where we know that a "core" subset of the training samples does not need to be forgotten. While this variation of the problem is conceptually simple, we show that working in this setting significantly improves the accuracy and guarantees of forgetting methods applied to vision classification tasks. Moreover, our method allows efficient removal of all information contained in non-core data by simply setting a subset of the weights to zero, with minimal loss in performance. We achieve these results by replacing a standard deep network with a suitable linear approximation. With appropriate changes to the network architecture and training procedure, we show that such a linear approximation achieves performance comparable to the original network, and that the forgetting problem becomes quadratic and can be solved efficiently even for large models. Unlike previous forgetting methods for deep networks, ours can achieve close to state-of-the-art accuracy on large-scale vision tasks. In particular, we show that our method allows forgetting without having to trade off model accuracy.
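To make the key mechanism concrete: once the network is replaced by its linearization around core-pretrained weights w0, i.e. f_w(x) ≈ f_{w0}(x) + ∇_w f_{w0}(x) · (w − w0), training the remaining weights with a quadratic loss makes the objective quadratic in w, so the influence of any subset of the non-core samples can be removed exactly with a single Newton step on the retained data. The sketch below is a minimal NumPy illustration of this idea, not the authors' implementation; the random tanh features stand in for the frozen Jacobian ∇_w f_{w0}(x), and all names (`features`, `lam`, the toy data) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "core" features: a stand-in for the Jacobian grad_w f_{w0}(x)
# obtained by linearizing a deep network around core-pretrained weights w0.
d_in, d_feat, n = 20, 50, 200
W_core = rng.normal(size=(d_in, d_feat))

def features(x):
    return np.tanh(x @ W_core)

X = rng.normal(size=(n, d_in))   # toy "user" (non-core) data
y = rng.normal(size=n)
lam = 1e-2                       # ridge term keeps the loss strongly convex

def hess_and_lin(X, y):
    """Hessian (F^T F + lam*I) and linear term (F^T y) of the quadratic
    loss 0.5*||F w - y||^2 + 0.5*lam*||w||^2 over the linear weights w."""
    F = features(X)
    return F.T @ F + lam * np.eye(d_feat), F.T @ y

# Train the linear weights on *all* user data (closed form).
H_all, b_all = hess_and_lin(X, y)
w_all = np.linalg.solve(H_all, b_all)

# Forget the first 50 samples: one Newton step on the retained loss.
keep = np.arange(50, n)
H_keep, b_keep = hess_and_lin(X[keep], y[keep])
grad_at_w_all = H_keep @ w_all - b_keep
w_forgot = w_all - np.linalg.solve(H_keep, grad_at_w_all)

# Because the loss is quadratic, the Newton step lands exactly on the
# minimizer of the retained loss -- the same weights retraining from
# scratch without the forgotten samples would produce.
w_retrained = np.linalg.solve(H_keep, b_keep)
print(np.allclose(w_forgot, w_retrained))  # True
```

The quadratic structure is what makes the guarantee possible: the updated weights carry no residual information about the forgotten samples, since they coincide exactly with the retrained solution, with no retraining over the retained data required.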
