Paper Title

Clustered Federated Learning based on Nonconvex Pairwise Fusion

Authors

Xue Yu, Ziyi Liu, Wu Wang, Yifan Sun

Abstract

This study investigates clustered federated learning (FL), one of the formulations of FL with non-i.i.d. data, where the devices are partitioned into clusters and each cluster optimally fits its data with a localized model. We propose a clustered FL framework that incorporates a nonconvex penalty on pairwise differences of parameters. Without a priori knowledge of the set of devices in each cluster or the number of clusters, this framework can autonomously estimate cluster structures. To implement the proposed framework, we introduce a novel clustered FL method called Fusion Penalized Federated Clustering (FPFC). Building upon the standard alternating direction method of multipliers (ADMM), FPFC performs partial updates at each communication round and allows parallel computation with variable workloads. These strategies significantly reduce the communication cost while preserving privacy, making FPFC practical for FL. We also propose a new warmup strategy for hyperparameter tuning in FL settings and explore an asynchronous variant of FPFC (asyncFPFC). Theoretical analysis provides convergence guarantees for FPFC with general losses and establishes the statistical convergence rate under a linear model with squared loss. Extensive experiments demonstrate the superiority of FPFC over current methods, including its robustness and generalization capability.
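To make the core idea concrete, the sketch below writes out a fusion-penalized clustered-FL objective: the sum of per-device losses plus a nonconvex penalty on pairwise differences of the device parameters. This is an illustrative sketch, not the paper's FPFC algorithm; the choice of the minimax concave penalty (MCP) and the squared loss are assumptions made here for concreteness (the paper treats general losses and does not specify this exact penalty form in the abstract).

```python
import numpy as np

def mcp(t, lam, gamma=3.0):
    # Minimax concave penalty (MCP): one common nonconvex penalty choice
    # (an assumption here; the paper's framework covers nonconvex penalties
    # on pairwise differences in general).
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def fusion_objective(W, X_list, y_list, lam):
    """Clustered-FL objective with squared loss (linear model).

    W      : (m, p) array; row i is device i's local parameter vector.
    X_list : list of m design matrices, one per device.
    y_list : list of m response vectors, one per device.
    lam    : fusion penalty level; larger values push devices to merge
             into fewer clusters (identical rows incur zero penalty).
    """
    m = W.shape[0]
    # Local empirical losses, one per device.
    loss = sum(0.5 * np.mean((X @ w - y) ** 2)
               for w, X, y in zip(W, X_list, y_list))
    # Nonconvex penalty on all pairwise parameter differences.
    penalty = sum(float(mcp(np.linalg.norm(W[i] - W[j]), lam))
                  for i in range(m) for j in range(i + 1, m))
    return loss + penalty
```

Because MCP is constant beyond `gamma * lam`, well-separated clusters pay only a bounded price, while nearby devices are pulled to share one parameter vector; this is what lets the cluster structure emerge without fixing the number of clusters in advance.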
