Paper Title
Interpreting Vulnerabilities of Multi-Instance Learning to Adversarial Perturbations
Paper Authors
Paper Abstract
Multi-Instance Learning (MIL) is a recent machine learning paradigm which is immensely useful in various real-life applications, such as image analysis, video anomaly detection, and text classification. It is well known that most existing machine learning classifiers are highly vulnerable to adversarial perturbations. Since MIL is a weakly supervised learning paradigm, where information is available only for a set of instances, called a bag, rather than for every instance, adversarial perturbations can be fatal. In this paper, we propose two adversarial perturbation methods to analyze the effect of adversarial perturbations and to interpret the vulnerability of MIL methods. Of the two algorithms, one can be customized for every bag, while the other is a universal one that can affect all bags in a given data set and thus has some generalizability. Through simulations, we also demonstrate the effectiveness of the proposed algorithms in fooling state-of-the-art (SOTA) MIL methods. Finally, we discuss, through experiments, how such adversarial perturbations can be handled with a simple strategy. Source codes are available at https://github.com/InkiInki/MI-UAP.
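To make the two attack settings concrete, the following is a minimal, hypothetical sketch, not the paper's actual MI-UAP algorithm (whose details are in the linked repository). It assumes a differentiable bag-level classifier `model` that maps a bag tensor of shape `(num_instances, feat_dim)` to class logits; the names `model`, `bags`, `epsilon`, and the FGSM/PGD-style optimization are illustrative assumptions only.

```python
# Hypothetical sketch of the two attack settings described in the abstract:
# (1) a perturbation customized per bag, and (2) a single universal
# perturbation shared by all bags. Not the paper's MI-UAP method.
import torch
import torch.nn.functional as F


def per_bag_perturbation(model, bag, label, epsilon=0.1):
    """FGSM-style perturbation tailored to a single bag (num_instances, feat_dim)."""
    bag = bag.clone().detach().requires_grad_(True)
    logits = model(bag)                                  # bag-level logits
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([label]))
    loss.backward()
    # Step each instance in the direction that increases the bag-level loss.
    return (bag + epsilon * bag.grad.sign()).detach()


def universal_perturbation(model, bags, labels, epsilon=0.1, lr=0.01, epochs=5):
    """One perturbation vector added to every instance of every bag."""
    feat_dim = bags[0].shape[1]
    delta = torch.zeros(feat_dim, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for bag, label in zip(bags, labels):
            logits = model(bag + delta)                  # broadcast over instances
            # Minimizing the negative loss maximizes the classification error.
            loss = -F.cross_entropy(logits.unsqueeze(0), torch.tensor([label]))
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                        # keep the perturbation small
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```

The per-bag variant re-optimizes a perturbation for each bag individually, whereas the universal variant learns a single bounded perturbation over the whole data set, which is what gives it the generalizability mentioned in the abstract.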