Paper Title

Horizontal Federated Learning and Secure Distributed Training for Recommendation System with Intel SGX

Authors

Siyuan Hui, Yuqiu Zhang, Albert Hu, Edmund Song

Abstract

With the advent of the big data era and the development of artificial intelligence and related technologies, data security and privacy protection have become increasingly important. Recommendation systems have many applications in our society, but building recommendation models is often inseparable from users' data. This is especially true for deep learning-based recommendation systems: due to the complexity of the models and the characteristics of deep learning itself, training not only requires long training times and abundant computational resources but also a large amount of user data, which poses a considerable challenge for data security and privacy protection. How to train a distributed recommendation system while ensuring data security has become an urgent problem. In this paper, we implement two schemes, Horizontal Federated Learning and Secure Distributed Training, based on Intel SGX (Software Guard Extensions), an implementation of a trusted execution environment, and the TensorFlow framework, to achieve secure, distributed recommendation-system learning in different scenarios. We experiment on the classical Deep Learning Recommendation Model (DLRM), a neural network-based model designed for personalization and recommendation, and the results show that our implementation introduces almost no loss in model performance, while the training speed remains within acceptable limits.
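The abstract does not detail the aggregation step, but horizontal federated learning of the kind named above typically combines locally trained models with weighted averaging (FedAvg-style). Below is a minimal, framework-agnostic sketch of that averaging step; the function name, data, and client setup are illustrative assumptions, not taken from the paper, and in the paper's setting the aggregation would additionally run inside an SGX enclave.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each model parameter across
    clients, weighted by the size of each client's local dataset.

    client_weights: list over clients, each a list of ndarrays
                    (one ndarray per model parameter).
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    # Start from zero tensors shaped like the first client's parameters.
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for acc, w in zip(avg, weights):
            acc += (n / total) * w  # weight by local dataset share
    return avg

# Illustrative usage: two clients, one parameter tensor each.
client_a = [np.array([1.0, 3.0])]   # trained on 1 sample (toy numbers)
client_b = [np.array([5.0, 7.0])]   # trained on 3 samples
global_weights = fedavg([client_a, client_b], [1, 3])
# global_weights[0] is 0.25 * [1, 3] + 0.75 * [5, 7] = [4.0, 6.0]
```

Weighting by dataset size keeps the aggregated model unbiased toward clients that contributed more data; unweighted averaging is the special case of equal `client_sizes`.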
