Paper Title

Shuffle Gaussian Mechanism for Differential Privacy

Authors

Liew, Seng Pei, Takahashi, Tsubasa

Abstract

We study the Gaussian mechanism in the shuffle model of differential privacy (DP). In particular, we characterize the mechanism's Rényi differential privacy (RDP), showing that it is of the form: $$ ε(λ) \leq \frac{1}{λ-1}\log\left(\frac{e^{-λ/2σ^2}}{n^λ} \sum_{\substack{k_1+\dotsc+k_n = λ; \\k_1,\dotsc,k_n\geq 0}}\binom{λ}{k_1,\dotsc,k_n}e^{\sum_{i=1}^nk_i^2/2σ^2}\right) $$ We further prove that the RDP is strictly upper-bounded by the Gaussian RDP without shuffling. The shuffle Gaussian RDP is advantageous in composing multiple DP mechanisms, where we demonstrate its improvement over the state-of-the-art approximate DP composition theorems in the privacy guarantees of the shuffle model. Moreover, we extend our study to the subsampled shuffle mechanism and the recently proposed shuffled check-in mechanism, which are protocols geared towards distributed/federated learning. Finally, an empirical study of these mechanisms is given to demonstrate the efficacy of employing the shuffle Gaussian mechanism under the distributed learning framework to guarantee rigorous user privacy.
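For small parameters, the bound in the abstract can be evaluated directly by enumerating the multinomial sum. Below is a minimal brute-force sketch (the function name `shuffle_gaussian_rdp` is our own, not from the paper): for n = 1 it reduces to the standard Gaussian RDP λ/(2σ²), and for n > 1 it yields a smaller value, consistent with the strict upper-bound claim above. This is only an illustration of the formula; the enumeration is exponential and infeasible for realistic λ and n.

```python
import itertools
import math

def shuffle_gaussian_rdp(lam: int, n: int, sigma: float) -> float:
    """Evaluate the RDP bound epsilon(lam) from the abstract by brute force.

    Enumerates all nonnegative integer tuples (k_1, ..., k_n) with
    k_1 + ... + k_n = lam via stars and bars. Only feasible for small lam, n.
    """
    total = 0.0
    # Each choice of n-1 "bar" positions among lam+n-1 slots encodes one
    # composition of lam into n nonnegative parts.
    for cuts in itertools.combinations(range(lam + n - 1), n - 1):
        ks, prev = [], -1
        for c in cuts:
            ks.append(c - prev - 1)
            prev = c
        ks.append(lam + n - 2 - prev)
        # Multinomial coefficient binom(lam; k_1, ..., k_n)
        coeff = math.factorial(lam)
        for k in ks:
            coeff //= math.factorial(k)
        total += coeff * math.exp(sum(k * k for k in ks) / (2 * sigma**2))
    inner = math.exp(-lam / (2 * sigma**2)) / n**lam * total
    return math.log(inner) / (lam - 1)
```

As a sanity check, with n = 1 the sum collapses to the single term k₁ = λ, giving ε(λ) = λ/(2σ²), the familiar Gaussian RDP; adding more users (larger n) shrinks the bound, reflecting the privacy amplification from shuffling.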
