Paper Title
A Study for Universal Adversarial Attacks on Texture Recognition
Paper Authors
Paper Abstract
Given the outstanding progress that convolutional neural networks (CNNs) have made on natural image classification and object recognition problems, deep learning methods have been shown to achieve very good recognition performance on many texture datasets as well. However, while CNNs for natural image classification/object recognition tasks have been revealed to be highly vulnerable to various types of adversarial attack methods, the robustness of deep learning methods for texture recognition is yet to be examined. In our paper, we show that there exist small image-agnostic/universal perturbations that can fool the deep learning models with testing fooling rates of more than 80% on all tested texture datasets. The perturbations computed with various attack methods on the tested datasets are generally quasi-imperceptible, containing structured patterns with low, middle and high frequency components.
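To make the notion of an image-agnostic/universal perturbation concrete, below is a minimal PyTorch sketch of the generic recipe: a single additive perturbation, shared across all images, is updated with signed gradients so that it tends to flip the model's clean predictions, and is projected onto an L-infinity ball. The function name `universal_perturbation`, the classifier `model`, the `loader`, and the budget `eps`/`step`/`epochs` values are illustrative assumptions, not the paper's exact attack methods.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=10/255, step=1/255, epochs=5):
    # A minimal sketch (assumed names and hyperparameters, not the paper's
    # exact attacks): learn one additive perturbation shared by all images,
    # constrained to an L-infinity ball of radius eps.
    model.eval()
    delta = None
    for _ in range(epochs):
        for x, _ in loader:
            if delta is None:
                # One perturbation for every image; broadcasts over the batch.
                delta = torch.zeros(1, *x.shape[1:])
            delta.requires_grad_(True)
            with torch.no_grad():
                clean_pred = model(x).argmax(dim=1)  # the model's clean labels
            # Ascend the cross-entropy of the perturbed inputs with respect
            # to the clean predictions, so adding delta tends to flip them.
            loss = F.cross_entropy(model(x + delta), clean_pred)
            loss.backward()
            with torch.no_grad():
                delta = (delta + step * delta.grad.sign()).clamp(-eps, eps)
    return delta.detach()
```

Under this setup, the testing fooling rate quoted in the abstract would be measured as the fraction of test images whose predicted label changes when the single computed `delta` is added to them.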