Paper Title


Privacy-Preserving Machine Learning Training in Aggregation Scenarios

Authors

Liehuang Zhu, Xiangyun Tang, Meng Shen, Jie Zhang, Xiaojiang Du

Abstract


In the development of smart cities, the growing popularity of Machine Learning (ML), which benefits from high-quality training datasets generated by diverse IoT devices, raises natural questions about the privacy guarantees that can be provided in such settings. Privacy-preserving ML training in an aggregation scenario enables a model demander to securely train ML models on sensitive IoT data gathered from personal IoT devices. Existing solutions are generally server-aided, cannot deal with collusion between servers or between servers and data owners, and do not match the delicate environments of IoT. We propose a privacy-preserving ML training framework named Heda, which consists of a library of building blocks based on partially homomorphic encryption (PHE). These building blocks enable the construction of multiple privacy-preserving ML training protocols for the aggregation scenario without the assistance of untrusted servers, while defending security under collusion. Rigorous security analysis demonstrates that the proposed protocols protect the privacy of each participant in the honest-but-curious model and defend security under most collusion situations. Extensive experiments validate the efficiency of Heda, which achieves privacy-preserving ML training without losing model accuracy.
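The abstract names partially homomorphic encryption (PHE) as the core primitive but does not specify which scheme Heda uses. As an illustration only, the minimal textbook Paillier sketch below (Paillier being a standard additively homomorphic PHE scheme, chosen here as an assumption) shows the property such building blocks rely on: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so an aggregator can combine encrypted IoT contributions without ever seeing an individual value. The tiny demo primes are for readability and are not secure.

```python
import random
from math import gcd

# Textbook Paillier cryptosystem (additively homomorphic PHE).
# Demo-sized primes only -- real deployments need >= 2048-bit moduli.

def keygen(p=1789, q=1861):
    """Return (public key n, private key (n, lam, mu)) for primes p, q."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    # With generator g = n + 1, decryption needs mu = lam^-1 mod n.
    mu = pow(lam, -1, n)
    return n, (n, lam, mu)

def encrypt(n, m):
    """Encrypt plaintext m (0 <= m < n) with public key n."""
    n2 = n * n
    while True:
        r = random.randrange(1, n)          # fresh randomness per ciphertext
        if gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    """Decrypt ciphertext c with private key (n, lam, mu)."""
    n, lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n          # L(x) = (x - 1) / n
    return (L * mu) % n

# Additive homomorphism: a product of ciphertexts decrypts to the
# sum of the plaintexts, which is the basis of private aggregation.
pub, priv = keygen()
c_a = encrypt(pub, 15)                      # one device's contribution
c_b = encrypt(pub, 27)                      # another device's contribution
aggregate = (c_a * c_b) % (pub * pub)       # aggregator works on ciphertexts
print(decrypt(priv, aggregate))             # prints 42, i.e. 15 + 27
```

In an aggregation protocol of this kind, only the key holder can run `decrypt`; the party combining contributions performs the modular multiplication on ciphertexts alone, which is what removes the need for a trusted intermediate server.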
