Paper Title
Hybrid Federated and Centralized Learning
Paper Authors
Paper Abstract
Many machine learning (ML) tasks rely on centralized learning (CL), which requires transmitting local datasets from the clients to a parameter server (PS) and thus incurs a huge communication overhead. Federated learning (FL) overcomes this issue by allowing the clients to send only model updates to the PS instead of their whole datasets. In this way, FL brings learning to the edge, where powerful computational resources are required on the client side. This requirement may not always be satisfied because of the diverse computational capabilities of edge devices. We address this with a novel hybrid federated and centralized learning (HFCL) framework that trains a learning model effectively by exploiting the clients' computational capabilities. In HFCL, only the clients with sufficient resources employ FL; the remaining clients resort to CL by transmitting their local datasets to the PS. This allows all clients to collaborate on the learning process regardless of their computational resources. We also propose a sequential data transmission approach with HFCL (HFCL-SDT) to reduce the training duration. The proposed HFCL frameworks outperform previously proposed non-hybrid FL (CL) based schemes in terms of learning accuracy (communication overhead), since all clients contribute their datasets to the learning process regardless of their computational resources.
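To make the division of labor in HFCL concrete, the following is a minimal sketch of the idea on a toy linear-regression problem: clients with enough compute act as FL clients and send gradients, while the remaining clients offload their raw data to the PS, which trains on that data centrally and aggregates everything. The client split, model, aggregation rule (a simple average), and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
# HFCL sketch (assumed toy setup): active clients do FL, passive clients do CL.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic local datasets: each client holds (X_k, y_k).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# Assumption: the first two clients have enough compute for FL ("active"),
# the rest transmit their raw datasets to the parameter server ("passive").
active, passive = clients[:2], clients[2:]

# CL part: passive clients' datasets are collected at the PS once.
X_ps = np.vstack([X for X, _ in passive])
y_ps = np.concatenate([y for _, y in passive])

w = np.zeros(2)   # global model held at the PS
lr = 0.1

def grad(w, X, y):
    """Mean-squared-error gradient of the linear model on one dataset."""
    return 2 * X.T @ (X @ w - y) / len(y)

for _ in range(100):
    # FL part: active clients compute model updates on their local data.
    client_grads = [grad(w, X, y) for X, y in active]
    # CL part: the PS computes the gradient on the offloaded data itself.
    ps_grad = grad(w, X_ps, y_ps)
    # Aggregate all contributions (simple average here) and update the model.
    w -= lr * np.mean(client_grads + [ps_grad], axis=0)

print("estimated weights:", w)   # approaches true_w = [2, -1]
```

In this sketch every client's data influences the global model, either through a transmitted gradient or through the dataset held at the PS, which is the property the abstract credits for HFCL's accuracy advantage over pure FL.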