Paper Title

AdaBest: Minimizing Client Drift in Federated Learning via Adaptive Bias Estimation

Authors

Farshid Varno, Marzie Saghayi, Laya Rafiee Sevyeri, Sharut Gupta, Stan Matwin, Mohammad Havaei

Abstract

In Federated Learning (FL), a number of clients or devices collaborate to train a model without sharing their data. Models are optimized locally at each client and further communicated to a central hub for aggregation. While FL is an appealing decentralized training paradigm, heterogeneity among data from different clients can cause the local optimization to drift away from the global objective. In order to estimate and therefore remove this drift, variance reduction techniques have recently been incorporated into FL optimization. However, these approaches inaccurately estimate the clients' drift and ultimately fail to remove it properly. In this work, we propose an adaptive algorithm that accurately estimates drift across clients. In comparison to previous works, our approach necessitates less storage and communication bandwidth, as well as lower compute costs. Additionally, our proposed methodology induces stability by constraining the norm of estimates for client drift, making it more practical for large-scale FL. Experimental findings demonstrate that the proposed algorithm converges significantly faster and achieves higher accuracy than the baselines across various FL benchmarks.
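The abstract describes correcting each client's local updates with a running estimate of its drift. The sketch below illustrates that general variance-reduction idea on a toy problem with heterogeneous local objectives. It is a minimal illustration under assumed update rules (the quadratic local objectives, the pseudo-gradient-based drift update, and the step size `beta` are all illustrative choices), not the authors' AdaBest algorithm.

```python
# Minimal sketch of drift correction in federated averaging, in the spirit of
# the variance-reduction methods the abstract refers to. All update rules and
# hyperparameters here are illustrative assumptions, NOT the AdaBest algorithm.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 5, 4
local_steps, lr, beta = 5, 0.1, 0.5           # beta: drift-estimate step size (assumed)

# Heterogeneous clients: client i minimizes f_i(x) = 0.5 * a_i * ||x - b_i||^2,
# so both the local optima b_i and the curvatures a_i differ across clients.
a = rng.uniform(0.5, 2.0, size=n_clients)     # per-client curvatures
b = rng.normal(size=(n_clients, dim))         # per-client local optima
global_opt = (a[:, None] * b).sum(axis=0) / a.sum()  # minimizer of the average objective

x_server = np.zeros(dim)                      # global model held by the server
h = np.zeros((n_clients, dim))                # per-client drift estimates

for rnd in range(100):
    pseudo_grads = np.zeros_like(h)
    for i in range(n_clients):
        x = x_server.copy()
        for _ in range(local_steps):
            grad = a[i] * (x - b[i])          # gradient of the local objective
            x -= lr * (grad - h[i])           # local step, corrected by the drift estimate
        # Pseudo-gradient: the client's net movement, rescaled back to gradient units.
        pseudo_grads[i] = (x_server - x) / (lr * local_steps)
    # Drift of each client = deviation of its pseudo-gradient from the client mean.
    h += beta * (pseudo_grads - pseudo_grads.mean(axis=0))
    # FedAvg-style aggregation: equivalent to averaging the returned client models.
    x_server -= lr * local_steps * pseudo_grads.mean(axis=0)

print("distance to global optimum:", np.linalg.norm(x_server - global_opt))
```

On this toy problem, setting `beta = 0` disables the correction and recovers plain FedAvg, whose fixed point is a differently weighted combination of the local optima and therefore biased away from the true global optimum; the drift estimates remove that bias while the constraint that they sum to zero keeps the aggregate update unbiased.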
