Paper Title

DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection

Paper Authors

Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi

Paper Abstract

Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These attacks inject a backdoor into the resulting model that allows adversary-controlled inputs to be misclassified. Existing countermeasures against backdoor attacks are inefficient and often merely aim to exclude deviating models from the aggregation. However, this approach also removes benign models of clients with deviating data distributions, causing the aggregated model to perform poorly for such clients. To address this problem, we propose DeepSight, a novel model filtering approach for mitigating backdoor attacks. It is based on three novel techniques that allow characterizing the distribution of data used to train model updates and seek to measure fine-grained differences in the internal structure and outputs of NNs. Using these techniques, DeepSight can identify suspicious model updates. We also develop a scheme that can accurately cluster model updates. Combining the results of both components, DeepSight is able to identify and eliminate model clusters containing poisoned models with high attack impact. We also show that the backdoor contributions of possibly undetected poisoned models can be effectively mitigated with existing weight clipping-based defenses. We evaluate the performance and effectiveness of DeepSight and show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
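The abstract outlines a filtering pipeline: score each model update for suspicion, cluster the updates, discard clusters dominated by suspicious models, and clip the weights of the remaining updates before aggregation. The Python sketch below illustrates that flow under simplifying assumptions; the suspicion scores, cluster labels, threshold, and helper names are hypothetical placeholders standing in for DeepSight's actual classifiers and similarity metrics, which the abstract does not specify.

```python
# Illustrative sketch of the filter -> cluster -> clip -> aggregate flow
# described in the abstract. Suspicion scores and cluster labels are assumed
# to come from upstream components; they stand in for DeepSight's actual
# classifiers and clustering, which this sketch does not reproduce.
import numpy as np

def clip_update(update, norm_bound):
    """Scale an update so its L2 norm does not exceed norm_bound
    (the standard weight/norm-clipping defense the abstract refers to)."""
    norm = np.linalg.norm(update)
    return update * (norm_bound / norm) if norm > norm_bound else update

def filter_and_aggregate(updates, suspicion_scores, cluster_labels, norm_bound):
    """Drop clusters dominated by suspicious updates, clip the rest,
    and average the survivors (FedAvg-style)."""
    accepted = []
    for label in set(cluster_labels):
        members = [i for i, l in enumerate(cluster_labels) if l == label]
        # Reject the whole cluster if most of its members look poisoned,
        # mirroring the cluster-level elimination described above.
        # The 0.5 threshold is an assumption for illustration only.
        if np.mean([suspicion_scores[i] for i in members]) > 0.5:
            continue
        accepted.extend(members)
    if not accepted:
        return None  # no trustworthy updates this round
    clipped = [clip_update(updates[i], norm_bound) for i in accepted]
    return np.mean(clipped, axis=0)

# Hypothetical usage with five clients' flattened weight updates:
updates = [np.random.randn(10) for _ in range(5)]
scores  = [0.05, 0.10, 0.90, 0.95, 0.00]  # from a suspicion classifier
labels  = [0, 0, 1, 1, 0]                 # from an update-clustering step
global_delta = filter_and_aggregate(updates, scores, labels, norm_bound=1.0)
```

In this toy run, the cluster containing the two high-suspicion updates is dropped entirely, while the remaining updates are norm-clipped before averaging, so even a poisoned update that slipped past the filter would have its contribution bounded.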
