Paper Title

FedCross: Towards Accurate Federated Learning via Multi-Model Cross-Aggregation

Authors

Ming Hu, Peiheng Zhou, Zhihao Yue, Zhiwei Ling, Yihao Huang, Anran Li, Yang Liu, Xiang Lian, Mingsong Chen

Abstract

As a promising distributed machine learning paradigm, Federated Learning (FL) has attracted increasing attention to deal with data silo problems without compromising user privacy. By adopting the classic one-to-multi training scheme (i.e., FedAvg), where the cloud server dispatches one single global model to multiple involved clients, conventional FL methods can achieve collaborative model training without data sharing. However, since only one global model cannot always accommodate all the incompatible convergence directions of local models, existing FL approaches greatly suffer from inferior classification accuracy. To address this issue, we present an efficient FL framework named FedCross, which uses a novel multi-to-multi FL training scheme based on our proposed multi-model cross-aggregation approach. Unlike traditional FL methods, in each round of FL training, FedCross uses multiple middleware models to conduct weighted fusion individually. Since the middleware models used by FedCross can quickly converge into the same flat valley in terms of loss landscapes, the generated global model can achieve a well-generalization. Experimental results on various well-known datasets show that, compared with state-of-the-art FL methods, FedCross can significantly improve FL accuracy within both IID and non-IID scenarios without causing additional communication overhead.
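The abstract describes a multi-to-multi training scheme: in each round, the server dispatches K middleware models to K clients, each client trains its model locally, and the server then fuses each middleware model with a collaborative peer by weighted averaging before the next round; the final global model is derived from the converged middleware models. The sketch below illustrates that flow under stated assumptions: the peer-selection rule (most similar peer by cosine similarity), the fusion weight `alpha`, and the stand-in local update are all illustrative choices, since the abstract only names "weighted fusion" without specifying these details.

```python
import math
import random

def local_train(model, lr=0.1):
    # Stand-in for one client's local update; the abstract does not
    # specify the local optimizer, so a noisy SGD-like step is assumed.
    return [w - lr * random.gauss(0, 1) for w in model]

def cosine(u, v):
    # Cosine similarity between two flattened parameter vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def cross_aggregate(models, alpha=0.9):
    # Cross-aggregation: fuse each middleware model with one peer via
    # weighted averaging. Choosing the most similar peer by cosine
    # similarity is an assumption made here for illustration.
    fused = []
    for i, m in enumerate(models):
        peer = max((o for j, o in enumerate(models) if j != i),
                   key=lambda o: cosine(m, o))
        fused.append([alpha * a + (1 - alpha) * b
                      for a, b in zip(m, peer)])
    return fused

def fedcross_round(models):
    # Multi-to-multi scheme: each of the K middleware models is
    # dispatched to one client, trained locally, then cross-aggregated.
    trained = [local_train(m) for m in models]
    return cross_aggregate(trained)

def global_model(models):
    # The final global model averages the middleware models, which are
    # expected to converge into the same flat loss-landscape valley.
    k = len(models)
    return [sum(ws) / k for ws in zip(*models)]
```

Note that, unlike FedAvg's one-to-multi scheme, no single global model is broadcast during training; averaging happens only pairwise per round, which is what lets the middleware models keep distinct convergence directions while still drifting toward a common valley.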
