Paper Title
Hedging Complexity in Generalization via a Parametric Distributionally Robust Optimization Framework
Paper Authors
Paper Abstract
Empirical risk minimization (ERM) and distributionally robust optimization (DRO) are popular approaches for solving stochastic optimization problems that arise in operations management and machine learning. Existing generalization error bounds for these methods depend on either the complexity of the cost function or the dimension of the random perturbations. Consequently, the performance of these methods can be poor for high-dimensional problems with complex objective functions. We propose a simple approach in which the distribution of the random perturbations is approximated using a parametric family of distributions. This mitigates both sources of complexity; however, it introduces a model misspecification error. We show that this new source of error can be controlled by suitable DRO formulations. Our proposed parametric DRO approach yields significantly improved generalization bounds over existing ERM and DRO methods, as well as parametric ERM, in a wide variety of settings. Our method is particularly effective under distribution shifts and applies broadly in contextual optimization. We also illustrate the superior performance of our approach on both synthetic and real-data portfolio optimization and regression tasks.
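To make the two-step idea in the abstract concrete, below is a minimal sketch of a parametric-DRO workflow, specialized to mean-variance portfolio selection (one of the tasks the abstract mentions). All modeling choices here are illustrative assumptions, not the paper's exact formulation: the Gaussian family, the Euclidean ball of radius `eps` around the fitted mean as the ambiguity set, and the risk-aversion parameter `gamma` are stand-ins chosen so that the worst-case expected cost has a simple closed form.

```python
# Sketch: (1) fit a parametric (Gaussian) model to the random perturbations,
# (2) hedge the resulting misspecification error with a DRO layer over the
# fitted parameters, (3) minimize the worst-case expected cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- Synthetic return data (the "random perturbations") -------------------
n_assets, n_samples = 5, 200
true_mu = rng.normal(0.05, 0.02, n_assets)
A = rng.normal(size=(n_assets, n_assets))
true_cov = 0.01 * (A @ A.T / n_assets + np.eye(n_assets))
returns = rng.multivariate_normal(true_mu, true_cov, size=n_samples)

# --- Step 1: fit the parametric family (Gaussian MLE) ---------------------
mu_hat = returns.mean(axis=0)
cov_hat = np.cov(returns, rowvar=False)

# --- Step 2: DRO over the parameters to control misspecification ----------
# Ambiguity set (an assumed choice): Gaussians N(mu, cov_hat) with
# ||mu - mu_hat||_2 <= eps. The worst-case expected cost of weights w is
#   -w @ mu_hat + eps * ||w||_2 + (gamma / 2) * w @ cov_hat @ w,
# since the adversary shifts the mean directly against the portfolio.
eps, gamma = 0.01, 5.0

def worst_case_cost(w):
    return (-w @ mu_hat
            + eps * np.linalg.norm(w)
            + 0.5 * gamma * w @ cov_hat @ w)

# --- Step 3: minimize the worst-case cost over the simplex ----------------
w0 = np.full(n_assets, 1.0 / n_assets)
res = minimize(
    worst_case_cost, w0, method="SLSQP",
    bounds=[(0.0, 1.0)] * n_assets,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print("parametric-DRO weights:", np.round(res.x, 3))
```

Setting `eps = 0` recovers plain parametric ERM on the fitted Gaussian; a positive radius buys robustness to estimation and misspecification error in the fitted mean at the price of a more conservative portfolio, which is the trade-off the abstract's generalization bounds quantify.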