Paper Title

Toward Smart Security Enhancement of Federated Learning Networks

Authors

Junjie Tan, Ying-Chang Liang, Nguyen Cong Luong, Dusit Niyato

Abstract

As traditional centralized learning networks (CLNs) are facing increasing challenges in terms of privacy preservation, communication overheads, and scalability, federated learning networks (FLNs) have been proposed as a promising alternative paradigm to support the training of machine learning (ML) models. In contrast to the centralized data storage and processing in CLNs, FLNs exploit a number of edge devices (EDs) to store data and perform training distributively. In this way, the EDs in FLNs can keep training data locally, which preserves privacy and reduces communication overheads. However, since the model training within FLNs relies on the contributions of all EDs, the training process can be disrupted if some of the EDs upload incorrect or falsified training results, i.e., poisoning attacks. In this paper, we review the vulnerabilities of FLNs, and particularly give an overview of poisoning attacks and mainstream countermeasures. Nevertheless, the existing countermeasures can only provide passive protection and fail to consider the training fees paid for the contributions of the EDs, resulting in an unnecessarily high training cost. Hence, we present a smart security enhancement framework for FLNs. In particular, a verify-before-aggregate (VBA) procedure is developed to identify and remove the non-benign training results from the EDs. Afterward, deep reinforcement learning (DRL) is applied to learn the behavior patterns of the EDs and to actively select the EDs that can provide benign training results and charge low training fees. Simulation results reveal that the proposed framework can protect FLNs effectively and efficiently.
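To illustrate the verify-before-aggregate idea in the abstract, the following is a minimal toy sketch (not the paper's actual algorithm): a server trains a linear model via federated averaging over five EDs, one of which poisons its update by flipping its sign, and each candidate update is kept only if it does not worsen the loss on a held-out validation set. The task, thresholding rule, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth weights of the toy task

def make_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.1):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE
    return w - lr * grad

def validation_loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X_val, y_val = make_data(50)  # server-side held-out validation set
w = np.zeros(2)

for _ in range(20):  # federated training rounds
    base = validation_loss(w, X_val, y_val)
    benign = []
    for ed in range(5):
        X, y = make_data(20)          # each ED's local data
        w_new = local_update(w, X, y)
        if ed == 4:                   # ED 4 is a poisoning attacker:
            w_new = 2 * w - w_new     # it flips the sign of its update
        # Verify before aggregate: keep only updates that do not
        # increase the loss on the server's validation set.
        if validation_loss(w_new, X_val, y_val) <= base:
            benign.append(w_new)
    if benign:
        w = np.mean(benign, axis=0)   # aggregate benign results only

print(np.round(w, 2))
```

With the poisoned updates filtered out, the averaged model converges toward the true weights; without the verification check, the sign-flipped contributions would pull the average away from the optimum.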
