Title

Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent

Authors

Da Yu, Gautam Kamath, Janardhan Kulkarni, Tie-Yan Liu, Jian Yin, Huishuai Zhang

Abstract

Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for recent advances in private deep learning. It provides a single privacy guarantee to all datapoints in the dataset. We propose output-specific $(\varepsilon,\delta)$-DP to characterize privacy guarantees for individual examples when releasing models trained by DP-SGD. We also design an efficient algorithm to investigate individual privacy across a number of datasets. We find that most examples enjoy stronger privacy guarantees than the worst-case bound. We further discover that the training loss and the privacy parameter of an example are well-correlated. This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees. For example, on CIFAR-10, the average $\varepsilon$ of the class with the lowest test accuracy is 44.2% higher than that of the class with the highest accuracy.
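The flavor of individual privacy accounting can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example, not the paper's actual accountant: it treats each DP-SGD step as a Gaussian mechanism whose sensitivity for a given example is that example's own clipped gradient norm, composes the per-step costs additively in zCDP, and converts the total to an $(\varepsilon,\delta)$ guarantee. The function name and inputs are illustrative, and subsampling amplification is ignored, so the absolute numbers are loose; the point is only that examples with smaller gradient norms accumulate smaller $\varepsilon$.

```python
import numpy as np

def individual_epsilon(grad_norms, sigma, clip_norm, delta):
    """Hypothetical per-example (epsilon, delta) estimate for DP-SGD.

    grad_norms: the example's gradient norms at each step it was sampled
        (a real accountant would record these during training).
    sigma: DP-SGD noise multiplier (noise std = sigma * clip_norm).
    Note: ignores subsampling amplification, so this is a loose sketch.
    """
    norms = np.minimum(np.asarray(grad_norms, dtype=float), clip_norm)
    # Each step is a Gaussian mechanism whose sensitivity for this example
    # is its own clipped gradient norm; its zCDP cost is
    # ||g_t||^2 / (2 * (sigma * clip_norm)^2), and zCDP composes additively.
    rho = np.sum(norms ** 2) / (2.0 * (sigma * clip_norm) ** 2)
    # Standard zCDP -> (epsilon, delta)-DP conversion.
    return rho + 2.0 * np.sqrt(rho * np.log(1.0 / delta))

# An example with uniformly small gradients gets a much tighter guarantee
# than one that is clipped at the worst-case bound on every step.
quiet = individual_epsilon([0.1] * 1000, sigma=1.0, clip_norm=1.0, delta=1e-5)
loud = individual_epsilon([1.0] * 1000, sigma=1.0, clip_norm=1.0, delta=1e-5)
print(f"small-gradient example eps = {quiet:.1f}, worst-case example eps = {loud:.1f}")
```

This mirrors the abstract's observation: the per-example $\varepsilon$ is driven by the example's gradient norms over training, which is also why it ends up correlated with training loss.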
