Paper Title

FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning

Authors

Zhao, Haodong, Du, Wei, Li, Fangqi, Li, Peixuan, Liu, Gongshen

Abstract


Federated learning (FL) enables global model training on decentralized data in a privacy-preserving way by aggregating model updates. However, for many natural language processing (NLP) tasks that use pre-trained language models (PLMs) with large numbers of parameters, the communication costs associated with FL are considerable. Recently, prompt tuning, which tunes a few soft prompts without modifying the PLM, has achieved excellent performance as a new learning paradigm. We therefore combine the two methods and explore the effect of prompt tuning under FL. In this paper, we propose "FedPrompt" to study prompt tuning in a model-split aggregation way under FL, and show that split aggregation greatly reduces the communication cost, to only 0.01% of the PLM's parameters, with little decrease in accuracy on both IID and Non-IID data distributions. This improves the efficiency of the FL method while also protecting data privacy in prompt tuning. In addition, like PLMs, prompts are uploaded and downloaded between public platforms and personal users, so we investigate whether a backdoor threat remains when only soft prompts are exchanged in FL scenarios. We further conduct backdoor attacks via data poisoning on FedPrompt. Our experiments show that a normal backdoor attack cannot achieve a high attack success rate, demonstrating the robustness of FedPrompt. We hope this work can promote the application of prompts in FL and raise awareness of possible security threats.
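The core idea of the split aggregation described above is that each client keeps the PLM frozen and trains only its soft-prompt embeddings, and the server aggregates just those prompt tensors (a tiny fraction of the model's parameters). A minimal sketch of such a prompt-only aggregation round is shown below; the function name `fedavg_prompts`, the NumPy representation, and the FedAvg-style dataset-size weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fedavg_prompts(client_prompts, client_sizes):
    """Weighted FedAvg over soft-prompt tensors only.

    The frozen PLM weights never leave the clients; only the prompt
    embeddings (shape: prompt_len x embed_dim) are communicated.

    client_prompts: list of np.ndarray, one per client, same shape
    client_sizes:   local dataset sizes used as aggregation weights
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                        # normalize to sum to 1
    stacked = np.stack(client_prompts)              # (n_clients, prompt_len, embed_dim)
    # Weighted average across the client axis.
    return np.tensordot(weights, stacked, axes=1)   # (prompt_len, embed_dim)

# Toy round: 3 clients, prompt length 20, embedding dim 768 (BERT-sized).
rng = np.random.default_rng(0)
prompts = [rng.normal(size=(20, 768)) for _ in range(3)]
global_prompt = fedavg_prompts(prompts, client_sizes=[100, 50, 50])
```

Communicating only a 20x768 prompt tensor instead of a full PLM is what yields the roughly 0.01%-of-parameters communication cost claimed in the abstract.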
