Paper Title

Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition

Paper Authors

Shoma Ishida and Satoshi Ono

Paper Abstract

This paper proposes a black-box adversarial attack method against automatic speech recognition systems. Some studies have attempted to attack neural networks for speech recognition; however, these methods did not consider the robustness of the generated adversarial examples against timing lag with a target speech. The method proposed in this paper adopts Evolutionary Multi-objective Optimization (EMO), which allows it to generate robust adversarial examples under a black-box scenario. Experimental results showed that the proposed method successfully generated adjust-free adversarial examples, which are sufficiently robust against timing lag that an attacker does not need to time their playback against the target speech.
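To make the idea concrete, below is a minimal, hypothetical Python sketch of the kind of search the abstract describes: an evolutionary multi-objective loop over audio perturbations that queries the target recognizer only as a black box, with one objective rewarding attack success averaged over random timing offsets (the robustness to timing lag mentioned above) and another penalizing perturbation magnitude. The asr_score oracle, the signal lengths, the offset range, and all hyperparameters are placeholder assumptions for illustration, not the authors' implementation.

"""
Hypothetical sketch of black-box, multi-objective adversarial perturbation
search for an ASR system. Not the paper's implementation.
"""
import numpy as np

rng = np.random.default_rng(0)

SAMPLE_RATE = 16000
SPEECH_LEN = SAMPLE_RATE * 2                      # 2 s of target speech (placeholder)
target_speech = rng.normal(0, 0.01, SPEECH_LEN)   # stand-in for real recorded audio


def asr_score(audio: np.ndarray) -> float:
    """Hypothetical black-box oracle: higher = closer to the attacker's desired
    transcription. A real attack would query the ASR system and compare its
    output text against the target phrase (e.g. via edit distance)."""
    return -float(np.abs(audio).mean())           # placeholder score only


def objectives(perturbation: np.ndarray, n_offsets: int = 4) -> tuple[float, float]:
    """Objective 1: negated average attack score over random timing offsets,
    which encourages robustness to playback lag. Objective 2: perturbation size.
    Both objectives are minimized."""
    scores = []
    for _ in range(n_offsets):
        shift = int(rng.integers(0, SAMPLE_RATE // 4))   # up to 0.25 s of lag
        scores.append(asr_score(target_speech + np.roll(perturbation, shift)))
    return -float(np.mean(scores)), float(np.linalg.norm(perturbation))


def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in all objectives and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def evolve(pop_size=20, generations=50, sigma=0.002):
    """Crude (mu + lambda) multi-objective evolutionary loop: mutate, pool
    parents with children, and keep the least-dominated individuals."""
    pop = [rng.normal(0, sigma, SPEECH_LEN) for _ in range(pop_size)]
    for _ in range(generations):
        children = [p + rng.normal(0, sigma, SPEECH_LEN) for p in pop]
        pool = pop + children
        fits = [objectives(x) for x in pool]
        # Rank by how many other individuals dominate each one (0 = non-dominated).
        ranked = sorted(
            range(len(pool)),
            key=lambda i: sum(dominates(fits[j], fits[i]) for j in range(len(pool))),
        )
        pop = [pool[i] for i in ranked[:pop_size]]
    return pop   # trade-off front between attack strength and perturbation loudness


if __name__ == "__main__":
    solutions = evolve(generations=5)   # tiny run just to exercise the loop
    print("final population size:", len(solutions))

Averaging the oracle score over several shifted copies of the perturbation is one simple way to bake timing-lag robustness into the fitness itself; a full NSGA-II-style selection with crowding distance could replace the crude domination-count ranking used here.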
