Paper Title

ADI: Adversarial Dominating Inputs in Vertical Federated Learning Systems

Authors

Qi Pang, Yuanyuan Yuan, Shuai Wang, Wenting Zheng

Abstract

Vertical federated learning (VFL) systems have recently become prominent as a concept for processing data distributed across many individual sources without the need to centralize it. Multiple participants collaboratively train models based on their local data in a privacy-aware manner. To date, VFL has become a de facto solution for securely learning a model among organizations, allowing knowledge to be shared without compromising the privacy of any individual. Despite the rapid development of VFL systems, we find that certain inputs of a participant, named adversarial dominating inputs (ADIs), can dominate the joint inference towards the direction of the adversary's will and force other (victim) participants to make negligible contributions, losing rewards that are usually offered in proportion to the importance of their contributions in federated learning scenarios. We conduct a systematic study on ADIs by first proving their existence in typical VFL systems. We then propose gradient-based methods to synthesize ADIs of various formats and exploit common VFL systems. We further launch greybox fuzz testing, guided by the saliency scores of "victim" participants, to perturb adversary-controlled inputs and systematically explore the VFL attack surface in a privacy-preserving manner. We conduct an in-depth study on the influence of critical parameters and settings in synthesizing ADIs. Our study reveals new VFL attack opportunities, promoting the identification of unknown threats before breaches and helping build more secure VFL systems.
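
The abstract names two techniques: gradient-based synthesis of ADIs and saliency-guided greybox fuzzing. Below is a minimal, illustrative sketch (not the paper's implementation) of the gradient-based idea in a toy two-party split-NN VFL setup: the adversary optimizes its own features so that the joint prediction is driven to a target class regardless of what the victim contributes. All architectures, dimensions, hyperparameters, and the assumption that the adversary can query a surrogate of the victim's bottom model are simplifications made for this sketch.

```python
# Illustrative sketch only: gradient-based synthesis of an adversarial
# dominating input (ADI) in a toy two-party split-NN VFL setup.
# Everything here is an assumption for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Bottom models held by the adversary and the victim, and a top model that
# fuses their embeddings -- a common split-NN layout for VFL.
adv_bottom = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
vic_bottom = nn.Sequential(nn.Linear(10, 16), nn.ReLU())  # surrogate, assumed known
top_model = nn.Sequential(nn.Linear(32, 8), nn.Tanh(), nn.Linear(8, 2))

x_adv = torch.randn(1, 10, requires_grad=True)  # adversary-controlled features
target = torch.tensor([1])                      # class the adversary wants

opt = torch.optim.Adam([x_adv], lr=0.05)
for step in range(200):
    opt.zero_grad()
    h_adv = adv_bottom(x_adv)
    # An ADI should push the joint prediction toward the target class no
    # matter what the victim's features are, so the loss is averaged over
    # several hypothetical victim inputs.
    loss = torch.zeros(())
    for _ in range(8):
        h_vic = vic_bottom(torch.randn(1, 10))
        logits = top_model(torch.cat([h_adv, h_vic], dim=1))
        loss = loss + F.cross_entropy(logits, target)
    (loss / 8).backward()
    opt.step()

# After optimization, x_adv tends to dominate the joint inference: the
# prediction stays at the target class across a wide range of victim inputs,
# so the victim's contribution to the decision becomes negligible.
```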
