Paper Title

Towards Practical Privacy-Preserving Solution for Outsourced Neural Network Inference

Authors

Pinglan Liu, Wensheng Zhang

Abstract

When a neural network model and data are outsourced to a cloud server for inference, it is desirable to preserve the confidentiality of both the model and the data, as the involved parties (i.e., the cloud server, the model-providing client, and the data-providing client) may not trust one another. Solutions have been proposed based on multi-party computation, trusted execution environments (TEE), and leveled or fully homomorphic encryption (LHE/FHE), but their limitations hamper practical application. We propose a new framework based on the synergistic integration of LHE and TEE, which enables collaboration among three mutually untrusted parties while minimizing the involvement of the (relatively) resource-constrained TEE and fully utilizing the untrusted but more resource-rich part of the server. We also propose a generic and efficient LHE-based inference scheme as an important performance-determining component of the framework. We implemented and evaluated the proposed system on a moderate platform and show that, compared to state-of-the-art LHE-based solutions, our scheme is applicable and scalable to a wider range of settings and achieves better performance.
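The core idea of homomorphic inference is that an untrusted server can evaluate (parts of) a network directly on ciphertexts, while only the key holder can remove the encryption. The following toy sketch illustrates that workflow for a single linear layer using simple additive masking; it is not the paper's scheme (real LHE such as BFV/CKKS also protects the weights and supports deeper circuits), and all names and parameters here are illustrative.

```python
# Toy illustration of homomorphically evaluating a linear layer on an
# untrusted server. Additive masking stands in for real LHE; unlike the
# paper's scheme, the weights are public here for simplicity.
import random

P = 2**61 - 1  # plaintext/ciphertext modulus (hypothetical parameter)

def keygen(n):
    # One random mask per input element (held by the data client).
    return [random.randrange(P) for _ in range(n)]

def enc(xs, keys):
    # "Encrypt" each input by adding its mask mod P.
    return [(x + k) % P for x, k in zip(xs, keys)]

def server_linear(weights, cts):
    # Untrusted server: weighted sum over ciphertexts, never sees plaintext.
    return sum(w * c for w, c in zip(weights, cts)) % P

def dec(ct, weights, keys):
    # Key holder removes the aggregated mask to recover the result.
    mask = sum(w * k for w, k in zip(weights, keys)) % P
    return (ct - mask) % P

x = [3, 5, 7]          # private input
w = [2, 4, 6]          # layer weights (public in this toy)
keys = keygen(len(x))
ct = server_linear(w, enc(x, keys))
assert dec(ct, w, keys) == 2 * 3 + 4 * 5 + 6 * 7  # = 68
```

The point of the sketch is the division of labor the abstract describes: the expensive linear algebra runs on the untrusted, resource-rich part of the server over encrypted data, while only lightweight key-dependent steps (here, decryption) need a trusted party such as a TEE.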
