Paper Title

Stochastic Constrained DRO with a Complexity Independent of Sample Size

Paper Authors

Qi Qi, Jiameng Lyu, Kung-Sik Chan, Er-Wei Bai, Tianbao Yang

Paper Abstract

Distributionally Robust Optimization (DRO), a popular method for training models that are robust to distribution shift between training and test sets, has received tremendous attention in recent years. In this paper, we propose and analyze stochastic algorithms that apply to both non-convex and convex losses for solving the Kullback-Leibler (KL) divergence constrained DRO problem. Compared with existing methods for this problem, our stochastic algorithms not only enjoy a competitive, if not better, complexity that is independent of the sample size, but also require only a constant batch size at every iteration, which is more practical for broad applications. We establish a nearly optimal complexity bound for finding an $ε$-stationary solution for non-convex losses and an optimal complexity bound for finding an $ε$-optimal solution for convex losses. Empirical studies demonstrate the effectiveness of the proposed algorithms for solving non-convex and convex constrained DRO problems.
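
For context, the KL divergence constrained DRO problem referenced above is commonly formulated as the min-max program below. This is a sketch under standard definitions, not necessarily the paper's exact notation: $w$ denotes the model parameters, $\ell(w; z_i)$ the loss on the $i$-th of $n$ training samples, $p \in \Delta_n$ a distribution over the samples (with $\Delta_n$ the probability simplex), and $\rho > 0$ the constraint radius.

$$
\min_{w} \; \max_{p \in \Delta_n} \; \sum_{i=1}^{n} p_i \, \ell(w; z_i)
\quad \text{s.t.} \quad
D_{\mathrm{KL}}\!\left(p \,\Big\|\, \tfrac{1}{n}\mathbf{1}\right) \;=\; \sum_{i=1}^{n} p_i \log(n p_i) \;\le\; \rho .
$$

The inner maximization reweights the uniform empirical distribution $\tfrac{1}{n}\mathbf{1}$ toward hard examples, while the KL constraint limits how far the adversarial distribution may drift from it.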
