Paper Title
Flower: A Friendly Federated Learning Research Framework
Paper Authors
Paper Abstract
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and systems heterogeneity. Although there are a number of research frameworks available to simulate FL algorithms, they do not support the study of scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower -- a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments and consider richly heterogeneous FL device scenarios. Our experiments show Flower can perform FL experiments up to 15M in client size using only a pair of high-end GPUs. Researchers can then seamlessly migrate experiments to real devices to examine other parts of the design space. We believe Flower provides the community with a critical new tool for FL study and development.
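To make the described workflow concrete, the sketch below shows how a single Flower client definition can be run both in large-scale simulation and on a real edge device, based on the publicly documented `flwr` 1.x Python API (simulation additionally requires the `flwr[simulation]` extra). This is an illustrative sketch, not code from the paper: the "model" is a placeholder NumPy vector, the training and evaluation logic is dummy, `<SERVER_IP>` is a placeholder, and exact argument names may differ between Flower versions.

```python
# Minimal illustrative sketch of the Flower (flwr) workflow, assuming the
# flwr 1.x public API. The "model" is a placeholder NumPy weight vector and
# all training/evaluation logic is dummy code.
import numpy as np
import flwr as fl


class FlowerClient(fl.client.NumPyClient):
    """A toy client whose 'model' is a single NumPy weight vector."""

    def __init__(self):
        self.weights = np.zeros(10)
        self.num_examples = 100  # placeholder size of the local dataset

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        self.weights = parameters[0]
        self.weights = self.weights + 0.1  # placeholder for local training
        return [self.weights], self.num_examples, {}

    def evaluate(self, parameters, config):
        self.weights = parameters[0]
        loss = float(np.mean(self.weights ** 2))  # placeholder loss
        return loss, self.num_examples, {}


if __name__ == "__main__":
    # Simulation mode: many virtual clients on one machine, resources permitting.
    fl.simulation.start_simulation(
        client_fn=lambda cid: FlowerClient(),
        num_clients=10,
        config=fl.server.ServerConfig(num_rounds=3),
    )
    # On a real edge device, the same client class would instead connect to a
    # remote Flower server (address is a placeholder):
    # fl.client.start_numpy_client(server_address="<SERVER_IP>:8080",
    #                              client=FlowerClient())
```

The point of the sketch is the migration path the abstract refers to: the same `FlowerClient` class is passed either to the simulation engine or to a real on-device client process, so moving an experiment from simulation to hardware does not require rewriting the FL logic.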