Paper Title

Critical Checkpoints for Evaluating Defence Models Against Adversarial Attack and Robustness

Paper Authors

Kanak Tekwani, Manojkumar Parmar

Paper Abstract

For the past couple of years there has been a cycle in which researchers propose a defence model against adversaries in machine learning that is arguably able to withstand most existing attacks under restricted conditions (it is evaluated on some bounded inputs or datasets). Shortly afterwards, another set of researchers finds vulnerabilities in that defence model and breaks it by proposing a stronger attack. Some common flaws have been noticed in past defence models that were broken in a very short time. Defence models being broken so easily is a point of concern, as decisions about many crucial activities are taken with the help of machine learning models. There is therefore a pressing need for defence checkpoints that any researcher should keep in mind while evaluating the soundness of a technique and declaring it a decent defence. In this paper, we suggest a few checkpoints that should be taken into consideration while building defence models and evaluating their soundness. All these points are recommended after observing why some past defence models failed and how some models withstood very strong attacks and proved their soundness.
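As an illustration of the kind of evaluation the abstract calls for, here is a minimal sketch (not taken from the paper) of measuring a defended classifier's robust accuracy under a strong white-box attack, L-infinity PGD. The names `model`, `loader`, and the hyperparameters `epsilon`, `alpha`, and `steps` are illustrative placeholders assuming a PyTorch setup, not anything defined by the authors.

```python
# Sketch: robust-accuracy evaluation of a defended classifier under L-inf PGD.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=20):
    """Projected Gradient Descent inside an L-infinity ball of radius epsilon."""
    # Random start inside the epsilon-ball, clipped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader):
    """Fraction of test samples still classified correctly under the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```

A defence whose robust accuracy collapses when `steps` is increased or when the attack is made adaptive to the defence is exactly the kind of model the paper's checkpoints are meant to catch before publication.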
