Paper Title
Adaptive Expert Models for Personalization in Federated Learning
Paper Authors
Paper Abstract
Federated Learning (FL) is a promising framework for distributed learning when data is private and sensitive. However, state-of-the-art solutions in this framework are not optimal when data is heterogeneous and non-Independent and Identically Distributed (non-IID). We propose a practical and robust approach to personalization in FL that adapts to heterogeneous and non-IID data by balancing exploration and exploitation of several global models. To achieve personalization, we use a Mixture of Experts (MoE) that learns to group clients that are similar to each other, while using the global models more efficiently. We show that our approach achieves an accuracy up to 29.78% and up to 4.38% better than a local model in a pathological non-IID setting, even though we tune our approach in the IID setting.
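The core personalization idea described above, a per-client gate that mixes the predictions of several global (expert) models, can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition only: the class name `MoEPersonalizer`, the per-client gating logits, and the plain softmax mixing are assumptions, not the paper's actual gating network or training procedure.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MoEPersonalizer:
    """Hypothetical per-client gate over several global expert models.

    Each client holds its own gating logits (one per expert) and mixes
    the experts' class-probability outputs into a personalized
    prediction. The paper's actual method trains a gating model on
    local data; here the weights are simply parameters of this class.
    """

    def __init__(self, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initialization: start close to a uniform mixture,
        # so the client initially "explores" all global models.
        self.gate_logits = rng.normal(0.0, 0.01, n_experts)

    def mix(self, expert_probs):
        # expert_probs: array of shape (n_experts, batch, n_classes)
        # holding each expert's predicted class probabilities.
        w = softmax(self.gate_logits)  # convex weights over experts
        # Contract the expert axis: result has shape (batch, n_classes).
        return np.tensordot(w, expert_probs, axes=1)
```

Because the gate outputs convex weights, the mixed prediction is itself a valid probability distribution; training the logits on a client's local data then shifts weight toward the global models that fit that client's (possibly non-IID) distribution.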