Paper Title

Feature Inference Attack on Model Predictions in Vertical Federated Learning

Authors

Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi

Abstract

Federated learning (FL) is an emerging paradigm for facilitating data collaboration among multiple organizations without revealing their private data to each other. Recently, vertical FL, where the participating organizations hold the same set of samples but with disjoint features and only one organization owns the labels, has received increasing attention. This paper presents several feature inference attack methods to investigate the potential privacy leakage in the model prediction stage of vertical FL. The attack methods consider the most stringent setting, in which the adversary controls only the trained vertical FL model and the model predictions and relies on no background information. We first propose two specific attacks on the logistic regression (LR) and decision tree (DT) models, each based on an individual prediction output. We further design a general attack method, based on multiple prediction outputs accumulated by the adversary, to handle complex models such as neural network (NN) and random forest (RF) models. Experimental evaluations demonstrate the effectiveness of the proposed attacks and highlight the need for private mechanisms that protect the prediction outputs in vertical FL.
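To make the LR attack concrete, below is a minimal sketch of the equation-solving idea it relies on: because the adversary holds the full trained model, its own features, and the released confidence score, it can invert the sigmoid and isolate the target party's contribution. This is an illustrative sketch under simplifying assumptions (the target party holds a single feature; all names, sizes, and values are hypothetical), not the paper's implementation.

```python
import numpy as np

# Hypothetical vertical LR model: the adversary holds 3 features, the
# target party holds 1, and the adversary knows all trained parameters.
w_adv = np.array([0.8, -1.2, 0.5])   # weights on the adversary's features
w_tgt = np.array([1.5])              # weight on the target party's feature
b = 0.1                              # bias term

x_adv = np.array([0.3, 0.7, -0.2])   # adversary's own feature values
x_tgt = np.array([0.9])              # target's private feature (unknown to the adversary)

# Confidence score released by the vertical FL model at prediction time.
p = 1.0 / (1.0 + np.exp(-(w_adv @ x_adv + w_tgt @ x_tgt + b)))

# Attack: invert the sigmoid, subtract the adversary's known contribution,
# and solve the remaining linear equation for the single unknown feature.
logit = np.log(p / (1.0 - p))
recovered = (logit - w_adv @ x_adv - b) / w_tgt[0]
print(recovered)  # ~0.9, i.e., x_tgt is recovered exactly
```

When the target party holds several features, a single prediction yields only one equation in several unknowns, which is why the paper's general attack on NN and RF models instead accumulates multiple prediction outputs before inferring the features.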
