Paper Title
Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images
Paper Authors
Paper Abstract
Unprecedented data collection and sharing have exacerbated privacy concerns and led to increasing interest in privacy-preserving tools that remove sensitive attributes from images while maintaining useful information for other tasks. Currently, state-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) for this purpose, for instance, to enable reliable facial expression recognition without leaking users' identity. However, PP-GANs do not offer formal proofs of privacy and instead rely on experimentally measuring information leakage via the classification accuracy of deep learning (DL)-based discriminators on the sensitive attributes. In this work, we question the rigor of such checks by subverting existing privacy-preserving GANs for facial expression recognition. We show that it is possible to hide sensitive identification data in the sanitized output images of such PP-GANs for later extraction, which can even allow reconstruction of the entire input image, while still satisfying the privacy checks. We demonstrate our approach via a PP-GAN-based architecture and provide qualitative and quantitative evaluations using two public datasets. Our experimental results raise fundamental questions about the need for more rigorous privacy checks of PP-GANs, and we provide insights into the social impact of these findings.
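The core idea that an innocuous-looking "sanitized" image can carry hidden, recoverable data can be illustrated with a simple least-significant-bit (LSB) steganography sketch. This is an assumption-laden toy example, not the paper's actual method (the paper trains the PP-GAN itself to embed the secret); it only shows that an image can encode extractable identity data while remaining visually, and to a discriminator, essentially unchanged.

```python
# Toy LSB steganography sketch: hide a byte string in the low-order bits of
# an image, then recover it. NOT the paper's method (which trains the
# generator to embed the secret); purely illustrative.
import numpy as np

def hide(image: np.ndarray, secret: bytes) -> np.ndarray:
    """Embed `secret` in the least-significant bits of a uint8 image."""
    flat = image.flatten()
    bits = np.unpackbits(np.frombuffer(secret, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("image too small for the secret")
    stego = flat.copy()
    # Clear each carrier pixel's LSB, then write one secret bit into it.
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(image.shape)

def extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of hidden data from the least-significant bits."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# A 64x64 "sanitized" image carries a hidden identity tag, yet no pixel
# differs from the original by more than 1 intensity level.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = b"subject_id=42"   # hypothetical identity payload
stego = hide(img, secret)
recovered = extract(stego, len(secret))
max_diff = int(np.max(np.abs(stego.astype(int) - img.astype(int))))
```

Because each pixel changes by at most one intensity level, a DL-based discriminator checking for identity leakage in pixel statistics can easily miss such a channel, which is precisely the kind of gap in empirical privacy checks the paper exploits.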