Paper Title
Defense against adversarial attacks on spoofing countermeasures of ASV
Paper Authors
Paper Abstract
Various forefront countermeasure methods for automatic speaker verification (ASV), with considerable anti-spoofing performance, were proposed in the ASVspoof 2019 challenge. However, previous work has shown that countermeasure models are vulnerable to adversarial examples that are indistinguishable from natural data. A good countermeasure model should not only be robust against spoofing audio, including synthetic, converted, and replayed audio, but should also counter examples deliberately generated by malicious adversaries. In this work, we introduce a passive defense method, spatial smoothing, and a proactive defense method, adversarial training, to mitigate the vulnerability of ASV spoofing countermeasure models to adversarial examples. This paper is among the first to use defense methods to improve the robustness of ASV spoofing countermeasure models under adversarial attacks. The experimental results show that both defense methods help spoofing countermeasure models counter adversarial examples.
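As a hedged illustration of the two defenses named in the abstract, the sketch below shows a generic form of spatial smoothing (a median filter applied to input spectrograms before the countermeasure model) and FGSM-based adversarial training. This is not the paper's implementation: the model interface, the (B, 1, F, T) feature shape, the kernel size, and the perturbation budget eps are assumed placeholders.

# Illustrative sketch only (assumed interfaces, not the paper's code):
# spatial smoothing as a passive defense and FGSM-based adversarial
# training as a proactive defense for a generic spoofing-countermeasure
# classifier that maps spectrograms (B, 1, F, T) to class logits.
import torch
import torch.nn.functional as F

def spatial_smoothing(spectrogram, kernel_size=3):
    """Median-filter a batch of spectrograms to blunt small adversarial
    perturbations before they reach the countermeasure model."""
    pad = kernel_size // 2
    x = F.pad(spectrogram, (pad, pad, pad, pad), mode="reflect")
    # Extract kernel_size x kernel_size patches and take their median.
    patches = x.unfold(2, kernel_size, 1).unfold(3, kernel_size, 1)
    return patches.contiguous().flatten(-2).median(dim=-1).values

def fgsm_example(model, x, y, eps=0.002):
    """Generate a fast-gradient-sign adversarial example from a clean batch."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.002):
    """One training step that mixes clean and adversarial examples."""
    model.train()
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, the passive defense is applied only at inference time (smooth the input, then score it), while the proactive defense changes the training objective itself; the two can be combined by also smoothing inputs during adversarial training.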