Title
A Fair Federated Learning Framework With Reinforcement Learning
Authors
Abstract
Federated learning (FL) is a paradigm in which many clients collaboratively train a model under the coordination of a central server while keeping their training data stored locally. However, heterogeneous data distributions across clients remain a challenge for mainstream FL algorithms, and may cause slow convergence, overall performance degradation, and unfairness of performance across clients. To address these problems, in this study we propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy that assigns aggregation weights to clients. Additionally, we propose to use the Gini coefficient as a measure of fairness for FL. More importantly, we use the Gini coefficient and the validation accuracy of clients in each communication round to construct a reward function for the reinforcement learning agent. PG-FFL is also compatible with many existing FL algorithms. We conduct extensive experiments on diverse datasets to verify the effectiveness of our framework. The experimental results show that our framework outperforms baseline methods in terms of overall performance, fairness, and convergence speed.
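As a minimal sketch of the fairness measure the abstract describes, the snippet below computes the Gini coefficient over per-client validation accuracies and combines it with mean accuracy into a reward. The exact reward form and the weighting factor `lam` are not specified in the abstract and are assumptions for illustration only.

```python
import numpy as np

def gini_coefficient(values):
    """Gini coefficient of non-negative values (0 = perfect equality, higher = more unequal)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Closed-form expression over sorted values:
    # G = sum_i (2i - n - 1) * v_i / (n * sum(v)), with i = 1..n.
    index = np.arange(1, n + 1)
    return float((2 * index - n - 1).dot(v) / (n * v.sum()))

def reward(client_accuracies, lam=1.0):
    """Hypothetical round reward: reward high mean validation accuracy
    and penalize inequality across clients via the Gini coefficient.
    The trade-off weight `lam` is an assumed hyperparameter."""
    accs = np.asarray(client_accuracies, dtype=float)
    return float(accs.mean() - lam * gini_coefficient(accs))
```

For example, equal accuracies give a Gini coefficient of 0 (perfectly fair), so the reward reduces to the mean accuracy; a highly unequal distribution is penalized in proportion to `lam`.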