Paper Title

Personalized Federated Learning via Variational Bayesian Inference

Authors

Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao

Abstract

Federated learning faces huge challenges from model overfitting due to the lack of data and the statistical diversity among clients. To address these challenges, this paper proposes a novel personalized federated learning method via Bayesian variational inference, named pFedBayes. To alleviate overfitting, weight uncertainty is introduced into the neural networks of both the clients and the server. To achieve personalization, each client updates its local distribution parameters by balancing its construction error over private data against its KL divergence with the global distribution from the server. Theoretical analysis gives an upper bound on the averaged generalization error and shows that the convergence rate of the generalization error is minimax optimal up to a logarithmic factor. Experiments show that the proposed method outperforms other advanced personalized methods on personalized models; e.g., pFedBayes outperforms other SOTA algorithms by 1.25%, 0.42% and 11.71% on MNIST, FMNIST and CIFAR-10, respectively, under non-i.i.d. limited data.
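The per-client update described above can be sketched as minimizing a data-fit term plus a KL penalty toward the server's global weight distribution. Below is a minimal illustrative sketch (not the authors' code), assuming diagonal-Gaussian weight posteriors on each client and a diagonal-Gaussian global distribution from the server; the weight `zeta` and the function names are hypothetical:

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ),
    summed over all network weights (closed form for Gaussians)."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q**2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p**2)
        - 0.5
    )

def client_objective(data_fit_error, mu_q, sigma_q, mu_p, sigma_p, zeta=1.0):
    """Personalized objective: error on private data plus a zeta-weighted
    KL divergence to the server's global distribution. A larger zeta pulls
    the client toward the global model; a smaller zeta favors personalization."""
    return data_fit_error + zeta * kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p)
```

When the client's posterior coincides with the global distribution, the KL term vanishes and the objective reduces to the pure data-fit error, which is the balance the abstract describes.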
