Paper Title
THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
Paper Authors
Paper Abstract
As more and more pre-trained language models adopt on-cloud deployment, privacy concerns are growing rapidly, mainly over the exposure of plain-text user data (e.g., search histories, medical records, bank accounts). Privacy-preserving inference of transformer models is in demand among cloud service users. To protect privacy, an attractive choice is to compute only on ciphertext under homomorphic encryption (HE). However, enabling inference of pre-trained models on ciphertext data is difficult due to the complex computations in transformer blocks, which are not yet supported by current HE tools. In this work, we introduce $\textit{THE-X}$, an approximation approach for transformers that enables privacy-preserving inference of pre-trained models developed with popular frameworks. $\textit{THE-X}$ proposes a workflow to handle the complex computations in transformer networks, including all the non-polynomial functions such as GELU, softmax, and LayerNorm. Experiments reveal that our proposed $\textit{THE-X}$ can enable transformer inference on encrypted data for different downstream tasks, all with negligible performance drop while enjoying a theory-guaranteed privacy-preserving advantage.
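The core obstacle the abstract names is that HE schemes evaluate only additions and multiplications, so non-polynomial functions like GELU must be replaced by polynomial surrogates. The sketch below illustrates the general idea with a least-squares polynomial fit to GELU on a bounded interval; this is a hypothetical illustration of the technique, not the actual approximation used in THE-X.

```python
import numpy as np
from math import erf, sqrt

def gelu(x):
    # Exact GELU via the Gaussian CDF (erf form): x * Phi(x)
    return np.array([0.5 * v * (1.0 + erf(v / sqrt(2.0))) for v in x])

# HE can only evaluate polynomials, so fit a low-degree polynomial
# to GELU on a bounded input range (activations must be clipped or
# normalized into this range at inference time).
xs = np.linspace(-4.0, 4.0, 1000)
coeffs = np.polyfit(xs, gelu(xs), deg=6)
poly_gelu = np.poly1d(coeffs)  # HE-friendly replacement for GELU

max_err = np.max(np.abs(poly_gelu(xs) - gelu(xs)))
print(f"max |poly - GELU| on [-4, 4]: {max_err:.4f}")
```

The degree of the polynomial trades accuracy against multiplicative depth in the HE circuit: a higher degree tracks GELU more closely but consumes more of the ciphertext's noise budget, which is why approximation-aware replacements of softmax and LayerNorm are needed as well.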