Paper Title
Communication-Efficient Adaptive Federated Learning
Paper Authors
Paper Abstract
Federated learning is a machine learning training paradigm that enables clients to jointly train models without sharing their own local data. However, implementing federated learning in practice still faces numerous challenges, such as the large communication overhead caused by repeated server-client synchronization and the lack of adaptivity in SGD-based model updates. Although various methods have been proposed to reduce the communication cost via gradient compression or quantization, and federated versions of adaptive optimizers such as FedAdam have been proposed to add adaptivity, current federated learning frameworks still cannot address all of the aforementioned challenges at once. In this paper, we propose a novel communication-efficient adaptive federated learning method (FedCAMS) with theoretical convergence guarantees. We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterparts. Extensive experiments on various benchmarks verify our theoretical analysis.
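The abstract does not spell out the algorithm, so the following is only a minimal sketch of how a communication-efficient adaptive federated update of this general kind can be organized: each client runs several local SGD steps, compresses the resulting model difference with top-k sparsification plus error feedback to cut communication, and the server applies an AMSGrad-style adaptive step to the averaged differences. All names here (top_k_compress, client_update, server_adaptive_step) and the hyperparameter values are illustrative assumptions, not the paper's reference implementation of FedCAMS.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep only the k largest-magnitude entries of v (illustrative compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def client_update(x, grad_fn, local_steps, lr, error, k):
    """Run local SGD steps, then return a compressed, error-compensated model difference."""
    w = x.copy()
    for _ in range(local_steps):
        w -= lr * grad_fn(w)               # local stochastic gradient step
    delta = (w - x) + error                # error feedback: add residual from the last round
    compressed = top_k_compress(delta, k)  # communication-efficient message to the server
    new_error = delta - compressed         # residual kept locally for the next round
    return compressed, new_error

def server_adaptive_step(x, m, v, v_hat, avg_delta,
                         beta1=0.9, beta2=0.99, eta=1.0, eps=1e-3):
    """AMSGrad-style adaptive server update on the averaged client differences."""
    m = beta1 * m + (1 - beta1) * avg_delta
    v = beta2 * v + (1 - beta2) * avg_delta ** 2
    v_hat = np.maximum(v_hat, v)           # max stabilization as in AMSGrad
    x = x + eta * m / (np.sqrt(v_hat) + eps)
    return x, m, v, v_hat

# Toy usage on per-client quadratic objectives (purely illustrative).
dim, num_clients = 10, 4
x = np.zeros(dim)
m, v, v_hat = np.zeros(dim), np.zeros(dim), np.zeros(dim)
errors = [np.zeros(dim) for _ in range(num_clients)]
targets = [np.random.randn(dim) for _ in range(num_clients)]

for _ in range(100):
    deltas = []
    for i in range(num_clients):
        grad_fn = lambda w, t=targets[i]: w - t   # gradient of 0.5 * ||w - t||^2
        d, errors[i] = client_update(x, grad_fn, local_steps=5, lr=0.1,
                                     error=errors[i], k=3)
        deltas.append(d)
    x, m, v, v_hat = server_adaptive_step(x, m, v, v_hat, np.mean(deltas, axis=0))
```

The sketch reflects the two ingredients the abstract names: the compressed client-to-server messages target the communication overhead, while the adaptive (AMSGrad-style) server step supplies the adaptivity that plain SGD-based federated averaging lacks.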