Title
On the Privacy Risks of Algorithmic Recourse
Authors
Abstract
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit them to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real-world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
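To illustrate the core idea of a distance-based membership inference attack, the sketch below uses a linear model, where the closest counterfactual has a closed form. This is a minimal toy example, not the paper's actual attacks: the logistic-regression training loop, the threshold `tau`, and the helper names (`counterfactual`, `recourse_distance`, `infer_member`) are all illustrative assumptions. The intuition is that training points tend to be classified more confidently, i.e. sit farther from the decision boundary, so a larger instance-to-counterfactual distance can signal membership.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (stands in for the target's training set)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Train a simple logistic regression by gradient descent
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

def counterfactual(x):
    """Closest point to x on the linear decision boundary (closed form)."""
    s = x @ w + b
    return x - (s / (w @ w)) * w

def recourse_distance(x):
    """L2 distance from an instance to its counterfactual."""
    return np.linalg.norm(x - counterfactual(x))

def infer_member(x, tau=0.5):
    """Predict 'member' if the recourse distance exceeds a threshold.

    tau is a hypothetical value; a real adversary would calibrate it,
    e.g. on shadow models.
    """
    return recourse_distance(x) >= tau
```

In practice the recourse method is a black box, so the adversary would query it for counterfactuals rather than compute them in closed form; the attack itself only needs the resulting distances.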