Paper Title

Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning

Paper Authors

Farokhi, Farhad

Paper Abstract

We use gradient sparsification to reduce the adverse effect of differential privacy noise on performance of private machine learning models. To this aim, we employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients. Noisy privacy-preserving gradients are used to perform stochastic gradient descent for training machine learning models. Sparsification, achieved by setting the smallest gradient entries to zero, can reduce the convergence speed of the training algorithm. However, by sparsification and compressed sensing, the dimension of communicated gradient and the magnitude of additive noise can be reduced. The interplay between these effects determines whether gradient sparsification improves the performance of differentially-private machine learning models. We investigate this analytically in the paper. We prove that, for small privacy budgets, compression can improve performance of privacy-preserving machine learning models. However, for large privacy budgets, compression does not necessarily improve the performance. Intuitively, this is because the effect of privacy-preserving noise is minimal in large privacy budget regime and thus improvements from gradient sparsification cannot compensate for its slower convergence.
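As a rough illustration of the mechanism described in the abstract, the Python sketch below performs one differentially-private SGD step with a sparsified gradient: the k largest-magnitude gradient entries are kept, the result is clipped to bound its L1 sensitivity, and Laplace noise is added only on the retained coordinates, which is where sparsification reduces the noise magnitude. The function names, the clipping-based sensitivity bound, and the noise scale clip_norm / epsilon are illustrative assumptions; the paper's compressed-sensing encoding of the sparse gradient and its exact sensitivity analysis are not reproduced here.

import numpy as np

def top_k_sparsify(grad, k):
    """Keep the k largest-magnitude entries of grad and zero out the rest."""
    idx = np.argsort(np.abs(grad))[-k:]          # indices of the k largest entries
    sparse = np.zeros_like(grad, dtype=float)
    sparse[idx] = grad[idx]
    return sparse, idx

def dp_sparse_gradient(grad, k, epsilon, clip_norm=1.0, rng=None):
    """Sparsified, clipped gradient perturbed with Laplace noise.

    Illustrative sketch: assumes the L1 sensitivity of the clipped gradient is
    bounded by clip_norm, so Laplace noise with scale clip_norm / epsilon gives
    epsilon-differential privacy for a single release.  The paper's
    compressed-sensing step (encoding the sparse gradient into a lower-dimensional
    vector before adding noise) is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    sparse, idx = top_k_sparsify(grad, k)
    # Clip so the L1 norm (and hence the sensitivity) is at most clip_norm.
    sparse = sparse * min(1.0, clip_norm / (np.linalg.norm(sparse, 1) + 1e-12))
    # Perturb only the k retained coordinates.
    noisy = sparse.copy()
    noisy[idx] += rng.laplace(scale=clip_norm / epsilon, size=k)
    return noisy

def sgd_step(weights, grad, k, epsilon, eta=0.1):
    """One stochastic gradient descent step using the privatized gradient."""
    return weights - eta * dp_sparse_gradient(grad, k, epsilon)

# Example: privatize a 10-dimensional gradient, keeping its 3 largest entries.
g = np.array([0.5, -0.1, 0.03, 1.2, -0.7, 0.02, 0.0, 0.4, -0.05, 0.9])
w = sgd_step(np.zeros(10), g, k=3, epsilon=0.5)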
