Paper Title
Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Paper Authors
Abstract
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data. This capability makes it especially interesting for healthcare applications, where patient and data privacy are of utmost concern. However, recent works on the inversion of deep neural networks from model gradients have raised concerns about the security of FL in preventing the leakage of training data. In this work, we show that the attacks presented in the literature are impractical in FL use cases where the clients' training involves updating the Batch Normalization (BN) statistics, and we provide a new baseline attack that works in such scenarios. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal trade-offs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics. Code is available at https://nvidia.github.io/NVFlare/research/quantifying-data-leakage.
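For readers unfamiliar with the attack family referenced in the abstract, the sketch below illustrates the generic gradient-matching formulation behind gradient inversion attacks. This is a minimal, hypothetical illustration, not the baseline attack proposed in the paper: the attacker initializes a dummy input and optimizes it so that the gradients it produces on a copy of the shared model match the gradients observed from a client update. PyTorch is assumed, and `model`, `observed_grads`, `label`, and `input_shape` are placeholder arguments.

```python
# Generic gradient inversion sketch (illustrative only, not the paper's method).
import torch
import torch.nn.functional as F

def invert_gradients(model, observed_grads, label, input_shape, steps=1000, lr=0.1):
    """Optimize a dummy input so its gradients match the observed client gradients.

    observed_grads: list of tensors, one per model parameter, taken from a client update.
    label: long tensor of shape (1,) with the (assumed known or guessed) class label.
    """
    dummy = torch.randn(1, *input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Forward pass of the dummy input through the shared model.
        loss = F.cross_entropy(model(dummy), label)
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared L2 distance between dummy and observed gradients.
        match = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, observed_grads))
        match.backward()
        optimizer.step()
    return dummy.detach()  # approximate reconstruction of the training sample
```

The paper's point is that this kind of optimization becomes much harder to apply in realistic FL settings where clients also update BN statistics; the sketch only conveys the basic objective that such attacks build on.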