Paper Title
Applying Tensor Decomposition to Image for Robustness against Adversarial Attack
Paper Authors
Paper Abstract
Nowadays, deep learning technology is advancing rapidly and shows remarkable performance in computer vision. However, it turns out that deep learning based models are highly vulnerable to small perturbations known as adversarial attacks: adding a small perturbation can easily fool a deep learning model. On the other hand, tensor decomposition methods are widely used for compressing tensor data, including data matrices, images, etc. In this paper, we suggest combining tensor decomposition with the model as a defense against adversarial examples. We verify that this idea is simple and effective for resisting adversarial attacks. In addition, the method rarely degrades the original performance on clean data. We experiment on MNIST, CIFAR10, and ImageNet data and show that our method is robust against state-of-the-art attack methods.
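The abstract does not say which decomposition the paper uses, so the sketch below is only an illustration of the general preprocessing idea: reconstruct the input from a low-rank approximation before feeding it to the classifier, so that small, high-rank adversarial perturbations are largely discarded. It uses a truncated SVD per channel (the matrix analogue of a low-rank tensor decomposition); the function names and the `rank` parameter are hypothetical, not taken from the paper.

```python
import numpy as np

def low_rank_reconstruct(channel, rank):
    """Reconstruct a single-channel image from its top-`rank` singular triplets."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

def defend(image, rank=8):
    """Low-rank preprocessing defense: keep only the dominant structure
    of the (possibly adversarial) input image.

    Accepts a 2-D grayscale array or an H x W x C color array;
    color channels are decomposed independently.
    """
    if image.ndim == 2:
        return low_rank_reconstruct(image, rank)
    return np.stack(
        [low_rank_reconstruct(image[..., c], rank) for c in range(image.shape[-1])],
        axis=-1,
    )
```

Because natural images concentrate most of their energy in a few singular components while an additive perturbation spreads across all of them, reconstructing from a small `rank` tends to move a perturbed image back toward the clean one while changing clean inputs only slightly.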