Paper Title


GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators

Authors

Chen, Dingfan, Orekondy, Tribhuvanesh, Fritz, Mario

Abstract


The wide-spread availability of rich data has fueled the growth of machine learning applications in numerous domains. However, growth in domains with highly-sensitive data (e.g., medical) is largely hindered, as the private nature of data prohibits it from being shared. To this end, we propose Gradient-sanitized Wasserstein Generative Adversarial Networks (GS-WGAN), which allow releasing a sanitized form of the sensitive data with rigorous privacy guarantees. In contrast to prior work, our approach is able to distort gradient information more precisely, thereby enabling the training of deeper models that generate more informative samples. Moreover, our formulation naturally allows for training GANs in both centralized and federated (i.e., decentralized) data scenarios. Through extensive experiments, we find our approach consistently outperforms state-of-the-art approaches across multiple metrics (e.g., sample quality) and datasets.
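The "gradient sanitization" the abstract refers to follows the standard clip-and-noise recipe behind differentially private training: bound each gradient's L2 norm so its sensitivity is fixed, then add calibrated Gaussian noise. The sketch below is a minimal, illustrative version of that generic mechanism, not the authors' implementation; the function name, parameters, and NumPy usage are assumptions for illustration.

```python
import numpy as np

def sanitize_gradient(grad, clip_bound=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative clip-and-noise gradient sanitization (Gaussian mechanism).

    1. Rescale the gradient so its L2 norm is at most `clip_bound`,
       fixing the sensitivity of the released quantity.
    2. Add Gaussian noise with std proportional to that bound.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    # Clipping: leaves small gradients untouched, scales large ones down.
    clipped = grad / max(1.0, norm / clip_bound)
    # Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_bound, size=grad.shape)
    return clipped + noise
```

In GS-WGAN this sanitization is applied only to the gradient signal the generator receives, so the generator's training (and hence the released synthetic data) carries a differential-privacy guarantee.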
