Paper Title
AESPA: Accuracy Preserving Low-degree Polynomial Activation for Fast Private Inference
Paper Authors
Paper Abstract
Hybrid private inference (PI) protocols, which synergistically utilize both multi-party computation (MPC) and homomorphic encryption, are among the most prominent techniques for PI. However, even the state-of-the-art PI protocols are bottlenecked by the non-linear layers, especially the activation functions. Although a standard non-linear activation function yields higher model accuracy, it must be processed via a costly garbled-circuit MPC primitive. A polynomial activation can instead be processed via the cheaper Beaver's multiplication-triple MPC primitive, but has so far incurred severe accuracy drops. In this paper, we propose an accuracy-preserving low-degree polynomial activation function (AESPA) that exploits the Hermite expansion of ReLU and basis-wise normalization. We apply AESPA to popular ML models, such as VGGNet, ResNet, and pre-activation ResNet, and show inference accuracy comparable to that of the standard models with ReLU activation, surpassing prior low-degree polynomial studies. Compared to the all-ReLU baseline on the state-of-the-art Delphi PI protocol, AESPA achieves up to 42.1x lower online latency and 28.3x lower communication cost.
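The abstract's core ingredient, the Hermite expansion of ReLU, can be sketched numerically. The snippet below is an illustrative approximation, not the paper's exact recipe: the function names, the standard-normal input assumption, and the midpoint-rule integration are our own choices. It projects ReLU onto the first few probabilists' Hermite polynomials `He_n` (orthogonal under the Gaussian weight, with `E[He_n(X)^2] = n!`), giving the coefficients of a low-degree polynomial surrogate for ReLU.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials


def hermite_relu_coeffs(degree, x_range=8.0, step=1e-3):
    """Project ReLU onto He_0..He_degree under a standard-normal input.

    c_n = E[ReLU(X) * He_n(X)] / n!   (since E[He_n(X)^2] = n!)
    The expectation is approximated by a midpoint rule on [-x_range, x_range];
    this is a sketch of the idea, not the paper's exact procedure.
    """
    x = np.arange(-x_range, x_range, step) + step / 2.0  # midpoints
    pdf = np.exp(-x * x / 2.0) / sqrt(2.0 * pi)          # N(0,1) density
    relu = np.maximum(x, 0.0)
    coeffs = np.empty(degree + 1)
    for n in range(degree + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0  # selects He_n
        coeffs[n] = np.sum(relu * He.hermeval(x, basis) * pdf) * step / factorial(n)
    return coeffs


def poly_relu(x, coeffs):
    """Evaluate the low-degree polynomial surrogate for ReLU."""
    return He.hermeval(x, coeffs)
```

Because evaluating `poly_relu` needs only additions and multiplications, it can be computed with Beaver's multiplication triples rather than garbled circuits, which is what makes the online phase cheap.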