Paper Title

Robustness Threats of Differential Privacy

Authors

Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets

Abstract

Differential privacy (DP) is the gold-standard concept for measuring and guaranteeing privacy in data analysis. It is well known that adding DP to a deep learning model costs accuracy. However, it remains unclear how DP affects the robustness of the model. Standard neural networks are not robust to different input perturbations, whether adversarial attacks or common corruptions. In this paper, we empirically observe an interesting trade-off between the privacy and the robustness of neural networks. We experimentally demonstrate that networks trained with DP can, in some settings, be even more vulnerable than their non-private counterparts. To explore this, we extensively study different robustness measurements, including FGSM and PGD adversaries, distance to linear decision boundaries, curvature profile, and performance on a corrupted dataset. Finally, we study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect (decrease or increase) the robustness of the model.
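
As an illustration of the robustness measurements named in the abstract, below is a minimal FGSM sketch in PyTorch. It is not the paper's evaluation code; the attack strength eps, the cross-entropy loss, and the assumption that inputs are images in [0, 1] are illustrative choices.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step Fast Gradient Sign Method: perturb x by eps in the
    direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the gradient sign and keep the result a valid image.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

A PGD adversary repeats this step several times with a smaller step size, projecting back into the eps-ball around x after each iteration.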
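
The "main ingredients" mentioned in the abstract correspond to the two changes DP-SGD makes to ordinary SGD: clipping each per-sample gradient and adding Gaussian noise to the summed gradient. The sketch below is a simplified microbatch implementation, not the authors' training code; the clipping norm, noise multiplier, and loss function are assumptions made for illustration.

import torch
import torch.nn.functional as F

def dp_sgd_step(model, optimizer, batch_x, batch_y,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: per-sample gradient clipping plus Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Compute and clip the gradient of every example separately.
    for x, y in zip(batch_x, batch_y):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(summed, grads):
            acc.add_(g * scale)

    # Add noise calibrated to the clipping norm, average, and update.
    for p, acc in zip(params, summed):
        noise = torch.randn_like(acc) * noise_multiplier * clip_norm
        p.grad = (acc + noise) / len(batch_x)
    optimizer.step()
    optimizer.zero_grad()

Varying clip_norm and noise_multiplier separately is the kind of ablation the abstract describes for studying how each ingredient shifts the model's robustness.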
