Paper Title
Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity
Paper Authors
Paper Abstract
We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to adversarial classification models proposed earlier and to maximum-margin classifiers. We also provide a reformulation of the distributionally robust model for linear classification, and show it is equivalent to minimizing a regularized ramp loss objective. Numerical experiments show that, despite the nonconvexity of this formulation, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.
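For reference, conditional value-at-risk admits the standard Rockafellar–Uryasev representation shown below. Reading the model as applying this risk measure to a random variable Z representing the distance to misclassification is an illustrative gloss on the abstract, not the paper's exact formulation:

```latex
% Standard Rockafellar--Uryasev representation of CVaR at level alpha.
\[
  \mathrm{CVaR}_{\alpha}(Z)
  \;=\;
  \inf_{t \in \mathbb{R}}
  \left\{\, t + \frac{1}{\alpha}\,\mathbb{E}\big[(Z - t)_{+}\big] \right\},
  \qquad \alpha \in (0,1],
\]
% where (u)_+ = max(u, 0); taking Z to be the distance to
% misclassification is our illustrative reading of the abstract.
```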
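Since the abstract reports that standard descent methods appear to reach the global minimizer of the regularized ramp loss despite its nonconvexity, a minimal sketch of such a method may be helpful. The clipped-hinge form of the ramp loss, the squared-norm regularizer, and all function names and hyperparameters below are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def ramp_loss(margins):
    """Ramp loss: a clipped hinge, r(m) = min(1, max(0, 1 - m))."""
    return np.clip(1.0 - margins, 0.0, 1.0)

def ramp_subgrad(margins):
    """Subgradient of the ramp loss w.r.t. the margin m.

    The ramp is flat for m >= 1 and m <= 0, with slope -1 in between;
    we pick 0 at the two kinks, a valid subgradient choice.
    """
    return np.where((margins > 0.0) & (margins < 1.0), -1.0, 0.0)

def fit_ramp_classifier(X, y, lam=0.1, lr=0.05, n_iters=2000, seed=0):
    """Minimize (1/n) * sum_i r(y_i * (w @ x_i + b)) + lam * ||w||^2
    by plain subgradient descent.

    lam, lr, and n_iters are illustrative hyperparameters; the exact
    regularizer and margin scaling in the paper may differ.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    b = 0.0
    for _ in range(n_iters):
        margins = y * (X @ w + b)
        g = ramp_subgrad(margins)              # shape (n,)
        # Chain rule: d(margin_i)/dw = y_i * x_i, d(margin_i)/db = y_i.
        grad_w = (g * y) @ X / n + 2.0 * lam * w
        grad_b = np.mean(g * y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    # Toy linearly separable data: two Gaussian blobs.
    rng = np.random.default_rng(1)
    n = 200
    X = np.vstack([rng.normal(-2.0, 1.0, (n // 2, 2)),
                   rng.normal(+2.0, 1.0, (n // 2, 2))])
    y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])
    w, b = fit_ramp_classifier(X, y)
    acc = np.mean(np.sign(X @ w + b) == y)
    print(f"train accuracy: {acc:.3f}")
```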