Paper Title


One-shot Federated Learning without Server-side Training

Authors

Shangchao Su, Bin Li, Xiangyang Xue

Abstract


Federated Learning (FL) has recently made significant progress as a new machine learning paradigm for privacy protection. Due to the high communication cost of traditional FL, one-shot federated learning is gaining popularity as a way to reduce the communication cost between clients and the server. Most existing one-shot FL methods are based on Knowledge Distillation; however, distillation-based approaches require an extra training phase and depend on publicly available data sets or generated pseudo samples. In this work, we consider a novel and challenging cross-silo setting: performing a single round of parameter aggregation on the local models without server-side training. In this setting, we propose an effective algorithm for Model Aggregation via Exploring Common Harmonized Optima (MA-Echo), which iteratively updates the parameters of all local models to bring them close to a common low-loss area on the loss surface, without harming performance on their own data sets at the same time. Compared to existing methods, MA-Echo works well even in extremely non-identical data distribution settings, where the support categories of each local model have no overlapping labels with those of the others. We conduct extensive experiments on two popular image classification data sets to compare the proposed method with existing methods and demonstrate the effectiveness of MA-Echo, which clearly outperforms the state-of-the-art. The source code can be accessed at \url{https://github.com/FudanVI/MAEcho}.
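To make the one-shot setting concrete, the sketch below shows the simplest possible single-round parameter aggregation: a weighted average of the clients' model parameters, computed once without any server-side training. This is only an illustrative baseline under assumed data structures (parameter dicts of floats); MA-Echo itself uses a more sophisticated iterative update toward a common low-loss region, for which the actual implementation is in the linked repository.

```python
def one_shot_aggregate(local_models, weights=None):
    """Aggregate client models in a single communication round.

    local_models: list of dicts mapping parameter names to values.
    weights: optional per-client weights (e.g., local data set sizes);
             defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0] * len(local_models)
    total = float(sum(weights))
    aggregated = {}
    for name in local_models[0]:
        # Weighted average of each parameter across all clients.
        aggregated[name] = sum(
            w * model[name] for w, model in zip(weights, local_models)
        ) / total
    return aggregated

# Two toy "local models", each a flat dict of scalar parameters.
client_a = {"w": 1.0, "b": 0.0}
client_b = {"w": 0.0, "b": 2.0}
global_model = one_shot_aggregate([client_a, client_b])
# Uniform averaging gives {"w": 0.5, "b": 1.0}.
```

Note that plain averaging is exactly what degrades under highly non-identical data distributions (e.g., clients with disjoint label sets), which is the failure mode MA-Echo is designed to address.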
