Paper Title

Robust Empirical Risk Minimization with Tolerance

Authors

Bhattacharjee, Robi, Hopkins, Max, Kumar, Akash, Yu, Hantao, Chaudhuri, Kamalika

Abstract

Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world, and current theoretical techniques requiring exponential sample complexity and complicated improper learning rules fall far from answering the need. In this work we study the fundamental paradigm of (robust) $\textit{empirical risk minimization}$ (RERM), a simple process in which the learner outputs any hypothesis minimizing its training error. RERM famously fails to robustly learn VC classes (Montasser et al., 2019a), a bound we show extends even to `nice' settings such as (bounded) halfspaces. As such, we study a recent relaxation of the robust model called $\textit{tolerant}$ robust learning (Ashtiani et al., 2022) where the output classifier is compared to the best achievable error over slightly larger perturbation sets. We show that under geometric niceness conditions, a natural tolerant variant of RERM is indeed sufficient for $\gamma$-tolerant robust learning VC classes over $\mathbb{R}^d$, and requires only $\tilde{O}\left( \frac{VC(H)\,d\log \frac{D}{\gamma\delta}}{\epsilon^2}\right)$ samples for robustness regions of (maximum) diameter $D$.
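To make the RERM paradigm concrete, the following is a minimal sketch (not the paper's algorithm) of robust empirical risk minimization over a toy finite hypothesis class: each hypothesis is a 1-D threshold classifier, the perturbation set is a discretized interval of radius `r`, and the learner simply returns any hypothesis minimizing the empirical robust 0-1 loss. The names `robust_loss`, `robust_erm`, and the threshold class are illustrative choices, not from the paper; a tolerant learner would be judged against the best error achievable under the slightly larger radius `(1 + gamma) * r`.

```python
# Illustrative sketch of robust ERM (RERM); names and setup are hypothetical.
import numpy as np

def robust_loss(h, x, y, perturbations):
    """Robust 0-1 loss: h errs on (x, y) if ANY allowed perturbation flips it."""
    return float(any(h(x + delta) != y for delta in perturbations))

def robust_erm(hypotheses, X, Y, perturbations):
    """RERM: return any hypothesis minimizing empirical robust risk."""
    risks = [
        sum(robust_loss(h, x, y, perturbations) for x, y in zip(X, Y)) / len(X)
        for h in hypotheses
    ]
    return hypotheses[int(np.argmin(risks))]

# Toy instance: threshold classifiers on the line, interval perturbations.
thresholds = np.linspace(-1, 1, 21)
hypotheses = [lambda x, t=t: 1 if x >= t else 0 for t in thresholds]

X = np.array([-0.8, -0.3, 0.2, 0.7])
Y = np.array([0, 0, 1, 1])
r = 0.1  # perturbation radius; tolerance would compare against radius (1+gamma)*r
deltas = np.linspace(-r, r, 5)

h_star = robust_erm(hypotheses, X, Y, deltas)
```

The sketch brute-forces a finite class for clarity; the paper's point is about when this simple minimization principle suffices statistically (in the tolerant model), not about how to compute the minimizer efficiently.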
