Paper Title

Free-rider Attacks on Model Aggregation in Federated Learning

Paper Authors

Yann Fraboni, Richard Vidal, Marco Lorenzi

Paper Abstract

Free-rider attacks against federated learning consist in dissimulating participation to the federated learning process with the goal of obtaining the final aggregated model without actually contributing with any data. This kind of attacks is critical in sensitive applications of federated learning, where data is scarce and the model has high commercial value. We introduce here the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameters aggregation, such as FedAvg or FedProx, and provide formal guarantees for these attacks to converge to the aggregated models of the fair participants. We first show that a straightforward implementation of this attack can be simply achieved by not updating the local parameters during the iterative federated optimization. As this attack can be detected by adopting simple countermeasures at the server level, we subsequently study more complex disguising schemes based on stochastic updates of the free-rider parameters. We demonstrate the proposed strategies on a number of experimental scenarios, in both iid and non-iid settings. We conclude by providing recommendations to avoid free-rider attacks in real world applications of federated learning, especially in sensitive domains where security of data and models is critical.
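
To make the attack described in the abstract concrete, below is a minimal NumPy sketch (not the authors' code) of FedAvg-style aggregation with one free-rider among fair clients. The free-rider either echoes the received global parameters unchanged (the plain attack) or perturbs them with decaying Gaussian noise to disguise itself. The noise schedule, the synthetic regression data, and all variable names are illustrative assumptions rather than the paper's exact construction.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of FedAvg
# aggregation with a free-rider client. Fair clients run local gradient
# descent; the free-rider holds no data and either returns the global
# parameters unchanged ("plain" attack) or adds decaying Gaussian noise
# ("disguised" attack). The sigma0 * (t+1)**(-gamma) schedule is an
# illustrative choice, not the exact scheme from the paper.
import numpy as np

rng = np.random.default_rng(0)
dim, n_fair, rounds, lr = 5, 4, 50, 0.1

# Synthetic local linear-regression data for the fair clients (assumption).
true_w = rng.normal(size=dim)
clients = []
for _ in range(n_fair):
    X = rng.normal(size=(100, dim))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

def local_update(w, X, y, steps=5):
    """A few steps of local gradient descent on the squared loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def free_rider_update(w, t, mode="disguised", sigma0=0.05, gamma=1.0):
    """Free-rider: no data, no training. Either echo the global model or
    perturb it with decaying noise to mimic a genuine local update."""
    if mode == "plain":
        return w.copy()
    return w + sigma0 * (t + 1) ** (-gamma) * rng.normal(size=w.shape)

w_global = np.zeros(dim)
for t in range(rounds):
    updates = [local_update(w_global, X, y) for X, y in clients]
    updates.append(free_rider_update(w_global, t))   # free-rider joins the round
    w_global = np.mean(updates, axis=0)              # FedAvg with equal weights

print("distance to fair optimum:", np.linalg.norm(w_global - true_w))
```

Running the sketch shows the dynamic the paper's guarantees describe: the free-rider contributes nothing, yet the aggregate still converges toward the model of the fair participants, so the attacker obtains the final model without supplying data.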
