Paper Title

Fast-Convergent Federated Learning via Cyclic Aggregation

Authors

Youngjoon Lee, Sangwoo Park, Joonhyuk Kang

Abstract

Federated learning (FL) aims to optimize a shared global model over multiple edge devices without transmitting (private) data to the central server. While it is theoretically well known that, under mild conditions, FL yields an optimal model -- matching a centrally trained model that assumes availability of all the edge-device data at the central server -- in practice it often requires a massive number of iterations to converge, especially in the presence of statistical/computational heterogeneity. This paper utilizes a cyclic learning rate on the server side to reduce the number of training iterations while improving performance, without any additional computational cost for either the server or the edge devices. Numerical results validate that simply plugging the proposed cyclic aggregation into existing FL algorithms effectively reduces the number of training iterations and improves performance.
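To illustrate the idea in the abstract, the following is a minimal sketch (not the authors' exact method) of FedAvg-style server aggregation where the global update is scaled by a server-side learning rate that varies cyclically over communication rounds. The triangular schedule, the function names, and all hyperparameter values (`base_lr`, `max_lr`, `cycle_len`) are illustrative assumptions.

```python
import numpy as np

def cyclic_lr(round_idx, base_lr=0.5, max_lr=1.5, cycle_len=10):
    # Triangular cyclic schedule: the rate ramps from base_lr up to
    # max_lr and back down once per cycle. All values are illustrative.
    pos = round_idx % cycle_len
    half = cycle_len / 2
    frac = pos / half if pos < half else (cycle_len - pos) / half
    return base_lr + (max_lr - base_lr) * frac

def cyclic_aggregate(global_w, client_ws, round_idx):
    # Average the client models (FedAvg-style), then move the global
    # model toward that average using the round's cyclic server-side
    # learning rate -- the "plug-in" replacement for plain averaging.
    avg_w = np.mean(client_ws, axis=0)
    delta = avg_w - global_w
    return global_w + cyclic_lr(round_idx) * delta
```

Because the schedule only rescales the server update, it adds no extra computation on the clients and a negligible amount on the server, consistent with the abstract's claim of no additional cost.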
