Paper Title

Differentially Private Generative Adversarial Networks with Model Inversion

Paper Authors

Dongjie Chen, Sen-ching Samson Cheung, Chen-Nee Chuah, Sally Ozonoff

Paper Abstract

To protect sensitive data when training a Generative Adversarial Network (GAN), the standard approach is the differentially private (DP) stochastic gradient descent method, in which controlled noise is added to the gradients. The quality of the output synthetic samples can be adversely affected, and the training of the network may not even converge in the presence of this noise. We propose the Differentially Private Model Inversion (DPMI) method, in which the private data is first mapped to the latent space via a public generator, followed by a lower-dimensional DP-GAN with better convergence properties. Experimental results on the standard datasets CIFAR10 and SVHN, as well as on a facial landmark dataset for Autism screening, show that our approach outperforms the standard DP-GAN method in terms of Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
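The "controlled noise added to the gradients" that the abstract refers to is the core of DP-SGD: each example's gradient is clipped to a fixed L2 norm and calibrated Gaussian noise is added to the batch sum. The sketch below illustrates that step only; it is a minimal NumPy illustration, not the authors' implementation, and the function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative defaults.

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD step: clip per-example gradients, add Gaussian noise.

    per_example_grads: array of shape (batch_size, dim).
    Hypothetical helper for exposition, not the paper's code.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-example L2 norms, kept as a column for broadcasting
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale each example's gradient so its L2 norm is at most clip_norm
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum over the batch, then add noise calibrated to the clipping bound
    summed = clipped.sum(axis=0)
    noisy = summed + rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    # Average to get the noisy gradient estimate used for the update
    return noisy / per_example_grads.shape[0]
```

Because the noise scale grows with the clipping bound and the gradient dimension of the network being trained, a lower-dimensional latent-space GAN (as in DPMI) suffers less quality degradation at the same privacy budget.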
