Paper Title

Decentralized Federated Learning via Mutual Knowledge Transfer

Authors

Chengxi Li, Gang Li, Pramod K. Varshney

Abstract

In this paper, we investigate the problem of decentralized federated learning (DFL) in Internet of things (IoT) systems, where a number of IoT clients train models collectively for a common task without sharing their private training data, in the absence of a central server. Most of the existing DFL schemes are composed of two alternating steps, i.e., model updating and model averaging. However, averaging model parameters directly to fuse different models at the local clients suffers from client drift, especially when the training data are heterogeneous across different clients. This leads to slow convergence and degraded learning performance. As a possible solution, we propose the decentralized federated learning via mutual knowledge transfer (Def-KT) algorithm, where local clients fuse models by transferring their learnt knowledge to each other. Our experiments on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods with model averaging, i.e., Combo and FullAvg, especially when the training data are not independent and identically distributed (non-IID) across different clients.
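
The abstract contrasts two ways of fusing models at the clients: direct parameter averaging and mutual knowledge transfer. Below is a minimal sketch of these two fusion styles in PyTorch; it is an illustration only, not the paper's exact Def-KT procedure, and the function names, the distillation-style loss, and all hyperparameters (temperature, alpha, learning rate) are assumptions introduced for clarity.

```python
import copy

import torch
import torch.nn.functional as F


def average_models(models):
    """Baseline fusion (FedAvg-style): element-wise average of model parameters."""
    avg_state = copy.deepcopy(models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]
        ).mean(dim=0)
    fused = copy.deepcopy(models[0])
    fused.load_state_dict(avg_state)
    return fused


def knowledge_transfer(student, teacher, loader, temperature=2.0, alpha=0.5, lr=0.01):
    """Hypothetical fusion by knowledge transfer: the receiving client trains its own
    model on its local data to match the sender's soft predictions instead of
    averaging weights. Illustrative stand-in, not the published Def-KT update."""
    optimizer = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for x, y in loader:
        optimizer.zero_grad()
        student_logits = student(x)
        with torch.no_grad():
            teacher_logits = teacher(x)
        # Supervised loss on the local labels plus a KL term toward the
        # sender's softened outputs (a classic distillation objective).
        ce = F.cross_entropy(student_logits, y)
        kd = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        loss = alpha * ce + (1.0 - alpha) * kd
        loss.backward()
        optimizer.step()
    return student
```

Intuitively, parameter averaging can pull heterogeneous local models toward a poor compromise (client drift), whereas a knowledge-transfer step lets the receiving model absorb the sender's behavior while still being grounded in its own local data.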
