Paper Title


The Privacy Onion Effect: Memorization is Relative

Paper Authors

Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramer

Abstract


Machine learning models trained on private datasets have been shown to leak their private data. While recent work has found that the average data point is rarely leaked, the outlier samples are frequently subject to memorization and, consequently, privacy leakage. We demonstrate and analyse an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack. We perform several experiments to study this effect, and understand why it occurs. The existence of this effect has various consequences. For example, it suggests that proposals to defend against memorization without training with rigorous privacy guarantees are unlikely to be effective. Further, it suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
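The peeling procedure described above can be illustrated with a toy simulation. The sketch below uses a k-nearest-neighbour distance as a stand-in "vulnerability" score (the paper measures vulnerability with membership-inference attacks on retrained models, not distances); all function and variable names here are illustrative, not from the paper. Scoring all points, removing the most exposed layer, and re-scoring the remainder shows how a fresh extreme layer surfaces among points that previously looked comparatively safe.

```python
# Toy sketch of the "onion" peeling loop, assuming an outlier score as a
# crude proxy for privacy-attack vulnerability (an assumption of this
# example, not the paper's actual attack).
import numpy as np

def outlier_scores(X, k=5):
    """Mean distance to the k nearest neighbours (higher = more outlying)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance (0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # synthetic "dataset"

# Layer 1: the 20 most exposed points under the current scoring.
scores = outlier_scores(X)
layer1 = np.argsort(scores)[-20:]

# Peel the layer off and re-score the remaining points.
remaining = np.delete(X, layer1, axis=0)
new_scores = outlier_scores(remaining)
layer2 = np.argsort(new_scores)[-20:]

# Both layers sit well above the median of their respective populations:
# removing the first extreme layer exposes a new one.
print(scores[layer1].min() > np.median(scores))
print(new_scores[layer2].min() > np.median(new_scores))
```

In the paper the analogous step is far more expensive: each "re-scoring" requires retraining models on the reduced dataset and rerunning the privacy attack, which is what makes the emergence of a new vulnerable layer a nontrivial empirical finding rather than a bookkeeping artifact of ranking.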
