Paper Title
FedPop: A Bayesian Approach for Personalised Federated Learning
Paper Authors
Paper Abstract
Personalised federated learning (FL) aims at collaboratively learning a machine learning model tailored to each client. Although promising advances have been made in this direction, most existing approaches do not allow for uncertainty quantification, which is crucial in many applications. In addition, personalisation in the cross-device setting still raises important issues, especially for new clients or those with a small number of observations. This paper aims at filling these gaps. To this end, we propose a novel methodology coined FedPop by recasting personalised FL into the population modeling paradigm, where each client's model involves fixed common population parameters together with client-specific random effects aimed at explaining data heterogeneity. To derive convergence guarantees for our scheme, we introduce a new class of federated stochastic optimisation algorithms which rely on Markov chain Monte Carlo methods. Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and, above all, enables uncertainty quantification under mild computational and memory overheads. We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performance on various personalised federated learning tasks.
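To make the population-modeling idea concrete, below is a minimal, self-contained sketch. It is not the authors' implementation: the toy Gaussian linear-regression setup, the client count, the step sizes, the number of SGLD steps, and the helper grad_loglik are all illustrative assumptions. It only shows the general shape of the recipe the abstract describes, i.e. each client's parameter splits into a shared fixed effect phi plus a per-client random effect z_i, clients refresh z_i with MCMC-style (here, SGLD-like noisy-gradient) steps, and the server performs a stochastic-optimisation update of phi from averaged client gradients.

```python
# Illustrative sketch only (not FedPop as specified in the paper).
# Model assumption: per-client parameter theta_i = phi + z_i, with a
# Gaussian prior z_i ~ N(0, sigma2 * I) accounting for heterogeneity.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, sigma2 = 5, 10, 1.0

# Synthetic heterogeneous clients: y = X @ (phi* + z_i*) + noise.
phi_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    z_true = 0.5 * rng.normal(size=d)
    X = rng.normal(size=(50, d))
    y = X @ (phi_true + z_true) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

phi = np.zeros(d)                          # shared population parameter
z = [np.zeros(d) for _ in range(n_clients)]  # per-client random effects

def grad_loglik(X, y, theta):
    # Gradient of the Gaussian log-likelihood w.r.t. theta,
    # averaged over the client's data for step-size stability.
    return X.T @ (y - X @ theta) / len(y)

eps, lr = 0.01, 0.1
for rnd in range(200):
    grads = []
    for i, (X, y) in enumerate(clients):
        # Client: a few SGLD-style noisy-gradient steps on z_i given phi
        # (data term kept averaged, so this targets a tempered posterior).
        for _ in range(5):
            g = grad_loglik(X, y, phi + z[i]) - z[i] / sigma2
            z[i] = z[i] + eps * g + np.sqrt(2 * eps) * rng.normal(size=d)
        # Client: stochastic gradient for phi at the sampled z_i.
        grads.append(grad_loglik(X, y, phi + z[i]))
    # Server: average client gradients and update the shared parameter.
    phi = phi + lr * np.mean(grads, axis=0)

print("||phi - phi_true|| =", np.linalg.norm(phi - phi_true))
```

In this layout the only per-client state is the low-dimensional random effect z_i, which hints at why such a scheme can support inference on new clients and uncertainty quantification (via the sampled z_i) at mild computational and memory cost; the actual FedPop algorithms and their non-asymptotic guarantees are developed in the paper itself.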