Paper Title
Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients
Paper Authors
Paper Abstract
Federated learning (FL), as a distributed machine learning approach, has drawn a great amount of attention in recent years. FL shows an inherent advantage in privacy preservation, since users' raw data are processed locally. However, it relies on a centralized server to perform model aggregation. Therefore, FL is vulnerable to server malfunctions and external attacks. In this paper, we propose a novel framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL), to enhance the security of FL. The proposed BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning. However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior. To be specific, we first develop a convergence bound of the loss function in the presence of lazy clients and prove that it is convex with respect to the total number of generated blocks $K$. Then, we solve the convex problem by optimizing $K$ to minimize the loss function. Furthermore, we discover the relationship between the optimal $K$, the number of lazy clients, and the power of the artificial noise used by lazy clients. We conduct extensive experiments to evaluate the performance of the proposed framework using the MNIST and Fashion-MNIST datasets. Our analytical results are shown to be consistent with the experimental results. In addition, the derived optimal $K$ achieves the minimum of the loss function, and in turn the optimal accuracy performance.
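The lazy-client behavior described above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: an honest client performs a genuine local update, while a lazy client plagiarizes the latest aggregated model and adds zero-mean Gaussian noise to disguise the copy. The function names, the learning rate, and the use of NumPy arrays as model parameters are assumptions for this sketch.

```python
import numpy as np

def honest_client_update(global_model, local_gradient, lr=0.1):
    # An honest client performs a genuine local SGD step on its own data.
    return global_model - lr * local_gradient

def lazy_client_update(global_model, noise_std, rng=None):
    # A lazy client skips local training: it plagiarizes the aggregated
    # global model and adds zero-mean Gaussian artificial noise (with
    # standard deviation noise_std) to conceal the cheating behavior.
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, noise_std, size=global_model.shape)
    return global_model + noise
```

With `noise_std = 0` the lazy update is an exact copy of the global model; larger noise power makes the copy harder to detect but further degrades the aggregated model, which is the trade-off the paper's convergence bound captures.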