Paper Title
Coded Federated Learning
Paper Authors
Paper Abstract
Federated learning is a method of training a global model from decentralized data distributed across client devices. Here, model parameters are computed locally by each client device and exchanged with a central server, which aggregates the local models into a global view without requiring the training data to be shared. The convergence performance of federated learning is severely impacted in heterogeneous computing platforms, such as those at the wireless edge, where straggling computations and communication links can significantly limit timely model parameter updates. This paper develops a novel coded computing technique for federated learning to mitigate the impact of stragglers. In the proposed Coded Federated Learning (CFL) scheme, each client device privately generates parity training data and shares it with the central server only once, at the start of the training phase. The central server can then preemptively perform redundant gradient computations on the composite parity data to compensate for erased or delayed parameter updates. Our results show that CFL allows the global model to converge nearly four times faster than an uncoded approach.
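The following is a minimal sketch of the CFL idea summarized in the abstract, assuming a linear-regression model so that gradients computed on coded (parity) data can stand in for missing client gradients. The Gaussian coding matrix, the amount of redundancy, and the straggler model are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_local = 10, 4, 50   # feature dim, number of clients, samples per client
c = 20                              # parity rows each client shares once (assumed redundancy level)

# Each client holds private local data (X_i, y_i); synthetic data for illustration.
clients = [(rng.normal(size=(n_local, d)), rng.normal(size=n_local))
           for _ in range(n_clients)]

# One-time setup: each client privately draws a coding matrix G_i and sends only
# the parity data (G_i X_i, G_i y_i) to the server; raw data never leaves the device.
parity = []
for X_i, y_i in clients:
    G_i = rng.normal(size=(c, n_local)) / np.sqrt(c)
    parity.append((G_i @ X_i, G_i @ y_i))
Xp = np.vstack([p[0] for p in parity])        # composite parity features at the server
yp = np.concatenate([p[1] for p in parity])   # composite parity labels at the server

w = np.zeros(d)
lr = 0.01
for step in range(100):
    # Simulate stragglers: only a random subset of clients returns its local gradient in time.
    active = rng.random(n_clients) > 0.5
    grad = np.zeros(d)
    for ok, (X_i, y_i) in zip(active, clients):
        if ok:
            grad += X_i.T @ (X_i @ w - y_i)   # squared-loss gradient from a responsive client
    # The server adds a redundant gradient computed on the composite parity data
    # to compensate for the erased or delayed client updates.
    grad += Xp.T @ (Xp @ w - yp)
    w -= lr * grad / (n_clients * n_local)
```

In this sketch the parity gradient is simply added to the partial sum of client gradients; how the redundancy is sized and weighted against the per-client updates is a design choice of the full scheme and is not specified by the abstract.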