Title
Adaptive Personalized Federated Learning
Authors
Abstract
Investigation of the degree of personalization in federated learning algorithms has shown that maximizing the performance of the global model alone confines the capacity of the local models to personalize. In this paper, we advocate an adaptive personalized federated learning (APFL) algorithm, in which each client trains its local model while contributing to the global model. We derive the generalization bound for the mixture of local and global models and find the optimal mixing parameter. We also propose a communication-efficient optimization method to collaboratively learn the personalized models, and analyze its convergence in both smooth strongly convex and nonconvex settings. Extensive experiments demonstrate the effectiveness of our personalization scheme, as well as the correctness of the established generalization theory.
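The core idea described in the abstract, a personalized model formed as a mixture of a client's local model and the shared global model, can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the function name and the use of flat NumPy parameter vectors are assumptions, and the mixing weight `alpha` stands in for the per-client mixing parameter the paper optimizes.

```python
import numpy as np

def personalized_model(local_params, global_params, alpha):
    """Convex combination of local and global model parameters.

    alpha weighs the local model; (1 - alpha) weighs the global model.
    alpha = 1 recovers a purely local model, alpha = 0 the global model.
    (Illustrative sketch; parameter handling in a real system would
    operate on full model state, not a single flat vector.)
    """
    return alpha * local_params + (1.0 - alpha) * global_params

# Example: a client that leans 25% on its local model.
local_w = np.array([1.0, 2.0])
global_w = np.array([3.0, 0.0])
mixed = personalized_model(local_w, global_w, alpha=0.25)
```

A larger `alpha` is beneficial when a client's data distribution differs strongly from the population; a smaller `alpha` lets the client borrow more strength from the collaboratively trained global model.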