Paper Title
Interpreting Robust Optimization via Adversarial Influence Functions
Paper Authors
Paper Abstract
Robust optimization is now widely used in data science, especially in adversarial training. However, little research has been done to quantify how robust optimization changes the optimizers and the prediction losses compared to standard training. In this paper, inspired by the influence function in robust statistics, we introduce the Adversarial Influence Function (AIF) as a tool to investigate the solution produced by robust optimization. The proposed AIF enjoys a closed form and can be computed efficiently. To illustrate the usage of AIF, we apply it to study model sensitivity -- a quantity defined to capture the change in prediction loss on natural data after implementing robust optimization. We use AIF to analyze how model complexity and randomized smoothing affect the model sensitivity of specific models. We further derive AIF for kernel regressions, with a particular application to neural tangent kernels, and experimentally demonstrate the effectiveness of the proposed AIF. Lastly, the theory of AIF is extended to distributionally robust optimization.
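The abstract defines model sensitivity as the change in natural-data prediction loss when robust optimization replaces standard training. The snippet below is a minimal sketch of that idea in a toy linear-regression setting with an ℓ∞-bounded adversary (not the paper's setup or implementation); all variable names and constants (`eps`, `lr`, `steps`, etc.) are illustrative assumptions. For this toy model the inner maximization has a closed form, max over ||δ||∞ ≤ ε of (y − w·(x+δ))² = (|y − w·x| + ε‖w‖₁)², which the robust training loop optimizes directly.

```python
# Minimal sketch (not the paper's method): compare standard least squares with
# robust optimization under l_inf input perturbations, and report the change in
# prediction loss on the natural data ("model sensitivity" in the abstract's sense).
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr, steps = 200, 5, 0.1, 0.05, 2000   # illustrative constants

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def natural_loss(w):
    """Mean squared error on the natural (unperturbed) data."""
    return np.mean((y - X @ w) ** 2)

def robust_loss_grad(w):
    """Gradient of the worst-case loss under l_inf perturbations of radius eps."""
    r = y - X @ w                                  # residuals on natural data
    m = np.abs(r) + eps * np.sum(np.abs(w))        # worst-case absolute residual
    # d/dw of mean(m^2), using subgradients of |r_i| and ||w||_1
    return np.mean(2 * m[:, None] * (-np.sign(r)[:, None] * X
                                     + eps * np.sign(w)[None, :]), axis=0)

# Standard training: ordinary least squares.
w_std = np.linalg.lstsq(X, y, rcond=None)[0]

# Robust optimization: gradient descent on the worst-case (adversarial) loss.
w_rob = np.zeros(d)
for _ in range(steps):
    w_rob -= lr * robust_loss_grad(w_rob)

# Model sensitivity: change in natural-data loss after robust optimization.
sensitivity = natural_loss(w_rob) - natural_loss(w_std)
print(f"natural loss (standard): {natural_loss(w_std):.4f}")
print(f"natural loss (robust):   {natural_loss(w_rob):.4f}")
print(f"model sensitivity:       {sensitivity:.4f}")
```

In this sketch the sensitivity is computed by refitting the model from scratch; the AIF proposed in the paper is instead intended to approximate such changes in closed form, without re-running the robust optimization.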