Paper Title
On the amplification of security and privacy risks by post-hoc explanations in machine learning models
Paper Authors
Paper Abstract
A variety of explanation methods have been proposed in recent years to help users gain insights into the results returned by neural networks, which are otherwise complex and opaque black boxes. However, explanations give rise to potential side channels that can be leveraged by an adversary to mount attacks on the system. In particular, post-hoc explanation methods that highlight input dimensions according to their importance or relevance to the result also leak information that weakens security and privacy. In this work, we perform the first systematic characterization of the privacy and security risks arising from various popular explanation techniques. First, we propose novel explanation-guided black-box evasion attacks that achieve a tenfold reduction in query count for the same success rate. We show that the adversarial advantage from explanations can be quantified as a reduction in the total variance of the estimated gradient. Second, we revisit the membership information leaked by common explanations. Contrary to observations in prior studies, our modified attacks reveal significant leakage of membership information (more than a 100% improvement over prior results), even in a much stricter black-box setting. Finally, we study explanation-guided model extraction attacks and demonstrate adversarial gains through a large reduction in query count.
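To make the variance-reduction claim concrete, the sketch below illustrates one plausible way an explanation can sharpen black-box gradient estimation: a finite-difference estimator that perturbs only the input dimensions flagged as important by an attribution map reaches a given accuracy with far fewer queries than one that perturbs all dimensions. This is a minimal illustration rather than the paper's attack; the quadratic toy loss, the antithetic Gaussian-smoothing estimator, and the use of the true gradient magnitude as a stand-in saliency map are all assumptions made for the example.

import numpy as np

# Hypothetical sketch (not the paper's implementation): estimate the gradient
# of a black-box scalar loss f at x by finite differences, either over all
# input dimensions or only over the top-k dimensions highlighted by a
# (simulated) post-hoc explanation. Concentrating the query budget on the
# highlighted subspace lowers the total variance of the estimate.

def estimate_gradient(f, x, dims, sigma=1e-3, n_queries=200, seed=0):
    """Finite-difference gradient estimate of f at x, perturbing only `dims`."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(n_queries // 2):          # each iteration costs two queries
        u = np.zeros_like(x)
        u[dims] = rng.standard_normal(len(dims))
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / (n_queries // 2)

# Toy target: a quadratic "loss" whose true gradient is 2 * A * x, with only
# 10 of 100 dimensions carrying most of the signal.
d = 100
A = np.ones(d)
A[:10] = 50.0
f = lambda z: float(np.sum(A * z ** 2))
x = np.ones(d)
true_grad = 2 * A * x

# Stand-in for the attribution map returned by an explanation method.
saliency = np.abs(true_grad)
top_k = np.argsort(saliency)[-10:]           # explanation-highlighted dimensions

g_full = estimate_gradient(f, x, np.arange(d))
g_expl = estimate_gradient(f, x, top_k)

print("error, perturbing all dims:      ", np.linalg.norm(g_full - true_grad))
print("error, perturbing explained dims:", np.linalg.norm(g_expl - true_grad))

At the same query budget, the explanation-restricted estimator typically shows a much lower estimation error in this toy setup, which is the kind of variance reduction the abstract refers to when quantifying the adversary's advantage.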