Paper Title

Audio-Visual Event Recognition through the lens of Adversary

Paper Authors

Juncheng B Li, Kaixin Ma, Shuhui Qu, Po-Yao Huang, Florian Metze

Paper Abstract

As audio/visual classification models are widely deployed for sensitive tasks like content filtering at scale, it is critical to understand their robustness along with improving their accuracy. This work aims to study several key questions related to multimodal learning through the lens of adversarial noise: 1) How does the choice of early/middle/late fusion affect the trade-off between robustness and accuracy? 2) How do different frequency/time domain features contribute to robustness? 3) How do different neural modules contribute to robustness against adversarial noise? In our experiments, we construct adversarial examples to attack state-of-the-art neural models trained on Google AudioSet. We compare how much attack potency, in terms of the size $\epsilon$ of an adversarial perturbation under different $L_p$ norms, is needed to "deactivate" the victim model. Using adversarial noise to ablate multimodal models, we provide insights into the best potential fusion strategy for balancing the model parameters/accuracy and robustness trade-off, and we distinguish the robust features from the non-robust features that various neural network models tend to learn.
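
For context, the $\epsilon$-bounded perturbations mentioned in the abstract are typically produced with projected gradient descent (PGD) under a chosen $L_p$ norm. Below is a minimal, hypothetical sketch of an $L_\infty$ PGD attack in PyTorch; the victim `model`, the binary-cross-entropy loss (AudioSet tagging is multi-label), and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_linf_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Iterative gradient-ascent steps, projected back into the L-infinity eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Multi-label loss assumed here; a single-label task would use cross_entropy instead.
        loss = F.binary_cross_entropy_with_logits(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)         # project into the eps-ball
    return x_adv.detach()
```

A smaller $\epsilon$ needed to "deactivate" a model indicates lower robustness, which is the comparison the paper makes across fusion strategies, input features, and neural modules.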
