Paper Title
Communication-Efficient Federated Learning with Accelerated Client Gradient
Paper Authors
Paper Abstract
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets. Such a tendency is aggravated when the client participation ratio is low since the information collected from the clients has large variations. To address this challenge, we propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model. This is achieved by making the server broadcast a global model with a lookahead gradient. This strategy enables the proposed approach to convey the projected global update information to participants effectively without additional client memory and extra communication costs. We also regularize local updates by aligning each client with the overshot global model to reduce bias and improve the stability of our algorithm. We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency compared to the state-of-the-art methods, especially with low client participation rates. The source code is available at our project page.
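To illustrate the server/client interplay the abstract describes, below is a minimal sketch of one plausible training loop: the server broadcasts a global model overshot along a lookahead (momentum) direction, and each client regularizes its local updates toward that overshot model. The quadratic local objectives, the hyperparameter names (`lam`, `beta`), and the exact momentum update rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Sketch of lookahead-broadcast federated learning, per the abstract.
# `lam` (momentum coefficient), `beta` (alignment weight), and the
# quadratic client losses are assumptions for illustration only.

rng = np.random.default_rng(0)
dim, n_clients, rounds, local_steps, lr = 10, 20, 50, 5, 0.1
lam, beta = 0.85, 0.01

# Heterogeneous clients: client i minimizes ||theta - target_i||^2 / 2.
targets = rng.normal(size=(n_clients, dim))

theta = np.zeros(dim)      # server (global) model
momentum = np.zeros(dim)   # accumulated global update direction

for t in range(rounds):
    # Server broadcasts the model overshot along the lookahead gradient,
    # conveying the projected global update at no extra communication cost.
    lookahead = theta + lam * momentum

    # Low client participation: only 2 of 20 clients sampled per round.
    sampled = rng.choice(n_clients, size=2, replace=False)
    deltas = []
    for i in sampled:
        local = lookahead.copy()
        for _ in range(local_steps):
            grad_task = local - targets[i]           # local loss gradient
            grad_align = beta * (local - lookahead)  # align with overshot model
            local -= lr * (grad_task + grad_align)
        deltas.append(local - lookahead)             # client sends its update

    # Server folds the averaged client update into the momentum and model.
    momentum = lam * momentum + np.mean(deltas, axis=0)
    theta = theta + momentum

print("distance to mean target:", np.linalg.norm(theta - targets.mean(axis=0)))
```

Note that the clients need no persistent state across rounds in this sketch, which is consistent with the abstract's claim of no additional client memory; only the broadcast model itself carries the lookahead information.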