Paper Title

An Exploratory Analysis on Users' Contributions in Federated Learning

Paper Authors

Huang, Jiyue, Talbi, Rania, Zhao, Zilong, Boucchenak, Sara, Chen, Lydia Y., Roos, Stefanie

Abstract

Federated Learning is an emerging distributed collaborative learning paradigm adopted by many of today's applications, e.g., keyboard prediction and object recognition. Its core principle is to learn from large amounts of user data while preserving data privacy by design, since collaborating users only need to share machine learning models and keep their data locally. The main challenge for such systems is to provide incentives for users to contribute high-quality models trained on their local data. In this paper, we aim to answer how well incentives recognize (in)accurate local models from honest and malicious users, and to assess their impact on the model accuracy of federated learning systems. We first present a thorough survey from two contrasting perspectives: incentive mechanisms that measure the contribution of local models from honest users, and malicious users who deliberately degrade the overall model. We conduct simulation experiments to empirically demonstrate whether existing contribution measurement schemes can disclose low-quality models from malicious users. Our results show a clear tradeoff among measurement schemes between computational efficiency and effectiveness in isolating the impact of malicious participants. We conclude the paper by discussing research directions for designing resilient contribution incentives.
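The core loop the abstract describes, clients train locally and a server aggregates their models while a contribution score downweights low-quality (e.g., malicious) updates, can be sketched in a few lines. This is a toy illustration under stated assumptions, not any scheme from the paper: the model is a one-parameter linear regressor, the contribution score is a hypothetical inverse-holdout-loss weight, and all function names (`local_update`, `contribution_score`, `aggregate`) are invented for this sketch.

```python
def local_update(w, data, lr=0.1):
    # One gradient-descent step on squared error for the model y = w * x,
    # computed on a client's local (x, y) pairs.
    grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def contribution_score(w, holdout):
    # Hypothetical contribution measure: inverse of the loss on a small
    # server-side holdout set (higher score = more useful local model).
    loss = sum((w * x - y) ** 2 for x, y in holdout) / len(holdout)
    return 1.0 / (1.0 + loss)

def aggregate(local_models, holdout):
    # Contribution-weighted federated averaging of the clients' parameters.
    scores = [contribution_score(w, holdout) for w in local_models]
    return sum(s * w for s, w in zip(scores, local_models)) / sum(scores)

# Toy round: the true relationship is y = 2x; one client flips its labels
# to act maliciously, mimicking the low-quality models discussed above.
honest_a  = [(1.0, 2.0), (2.0, 4.0)]
honest_b  = [(1.0, 2.0), (3.0, 6.0)]
malicious = [(1.0, -2.0), (2.0, -4.0)]  # label-flipped data
holdout   = [(1.0, 2.0), (3.0, 6.0)]

local_models = [local_update(0.0, d) for d in (honest_a, honest_b, malicious)]
plain_avg = sum(local_models) / len(local_models)      # unweighted FedAvg
weighted  = aggregate(local_models, holdout)           # contribution-weighted
```

With these numbers the weighted aggregate lands noticeably closer to the true parameter (2.0) than the plain average, because the score assigns the label-flipping client a small weight; this is the kind of effect the paper's measurement schemes aim for, traded off against the extra cost of scoring each client.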
