Paper Title


Omnipredictors for Constrained Optimization

Authors

Lunjia Hu, Inbal Livni-Navon, Omer Reingold, Chutong Yang

Abstract


The notion of omnipredictors (Gopalan, Kalai, Reingold, Sharan and Wieder, ITCS 2021) suggested a new paradigm for loss minimization. Rather than learning a predictor based on a known loss function, omnipredictors can easily be post-processed to minimize any one of a rich family of loss functions, compared with the loss of hypotheses in a class $\mathcal{C}$. It has been shown that such omnipredictors exist and are implied (for all convex and Lipschitz loss functions) by the notion of multicalibration from the algorithmic fairness literature. In this paper, we introduce omnipredictors for constrained optimization and study their complexity and implications. The notion that we introduce allows the learner to be unaware of the loss function that will later be assigned, as well as of the constraints that will later be imposed, as long as the subpopulations used to define these constraints are known. We show how to obtain omnipredictors for constrained optimization problems, relying on appropriate variants of multicalibration. We also investigate the implications of this notion when the constraints used are so-called group fairness notions.
