Paper Title


Concentrated Differentially Private and Utility Preserving Federated Learning

Paper Authors

Rui Hu, Yuanxiong Guo, Yanmin Gong

Abstract


Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server without sharing their local data. At each communication round of federated learning, edge devices perform multiple steps of stochastic gradient descent with their local data and then upload the computation results to the server for model update. During this process, the challenge of privacy leakage arises due to the information exchange between edge devices and the server when the server is not fully trusted. While some previous privacy-preserving mechanisms could readily be used for federated learning, they usually come at a high cost on convergence of the algorithm and utility of the learned model. In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility through a combination of local gradient perturbation, secure aggregation, and zero-concentrated differential privacy (zCDP). We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates. Through extensive numerical experiments on real-world datasets, we demonstrate the effectiveness of our proposed method and show its superior trade-off between privacy and model utility.
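The abstract names two concrete ingredients: local gradient perturbation via the Gaussian mechanism, and privacy accounting under zero-concentrated differential privacy (zCDP). The sketch below illustrates only these standard building blocks, not the paper's actual algorithm; the function names and the noise-multiplier parameterization are illustrative assumptions. For the Gaussian mechanism with L2 sensitivity C and noise standard deviation σ = zC, each release satisfies ρ-zCDP with ρ = 1/(2z²), and ρ composes additively across communication rounds.

```python
import numpy as np

def clip_and_perturb(grad, clip_norm, noise_multiplier, rng):
    """Clip a local gradient to L2 norm `clip_norm`, then add Gaussian
    noise with std = noise_multiplier * clip_norm (Gaussian mechanism).
    Illustrative helper, not the paper's exact procedure."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def zcdp_per_round(noise_multiplier):
    """One Gaussian release with sensitivity C and std sigma = z * C
    satisfies rho-zCDP with rho = C^2 / (2 sigma^2) = 1 / (2 z^2)."""
    return 1.0 / (2.0 * noise_multiplier ** 2)

def zcdp_total(noise_multiplier, rounds):
    """zCDP composes additively: T rounds cost T * rho each."""
    return rounds * zcdp_per_round(noise_multiplier)
```

For example, a noise multiplier of z = 2 over 100 rounds gives a total privacy cost of 100 · 1/8 = 12.5 in zCDP terms; converting this budget to an (ε, δ) guarantee and accounting for secure aggregation is exactly the kind of end-to-end analysis the paper provides.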
