Paper Title
When are Local Queries Useful for Robust Learning?
Paper Authors
Paper Abstract
Distributional assumptions have been shown to be necessary for the robust learnability of concept classes when considering the exact-in-the-ball robust risk and access to random examples by Gourdeau et al. (2019). In this paper, we study learning models where the learner is given more power through the use of local queries, and give the first distribution-free algorithms that perform robust empirical risk minimization (ERM) for this notion of robustness. The first learning model we consider uses local membership queries (LMQ), where the learner can query the label of points near the training sample. We show that, under the uniform distribution, LMQs do not increase the robustness threshold of conjunctions and any superclass, e.g., decision lists and halfspaces. Faced with this negative result, we introduce the local equivalence query ($\mathsf{LEQ}$) oracle, which returns whether the hypothesis and target concept agree in the perturbation region around a point in the training sample, as well as a counterexample if it exists. We show a separation result: on the one hand, if the query radius $\lambda$ is strictly smaller than the adversary's perturbation budget $\rho$, then distribution-free robust learning is impossible for a wide variety of concept classes; on the other hand, the setting $\lambda=\rho$ allows us to develop robust ERM algorithms. We then bound the query complexity of these algorithms based on online learning guarantees and further improve these bounds for the special case of conjunctions. We finish by giving robust learning algorithms for halfspaces on $\{0,1\}^n$ and then obtaining robustness guarantees for halfspaces in $\mathbb{R}^n$ against precision-bounded adversaries.
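For context, the exact-in-the-ball robust risk referenced above is standardly defined as follows (a standard rendering of the definition from Gourdeau et al. (2019), with $B_\rho(x)$ the Hamming ball of radius $\rho$ around $x$, $c$ the target concept, and $D$ the example distribution):

$\mathrm{R}_\rho(h, c) = \Pr_{x \sim D}\left[\exists z \in B_\rho(x) : h(z) \neq c(z)\right]$

In other words, the hypothesis $h$ is penalized whenever it disagrees with the target concept anywhere in the perturbation region, not merely when the label of $x$ itself is wrong.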
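To make the $\mathsf{LEQ}$-driven robust ERM idea concrete, below is a minimal Python sketch for monotone conjunctions over $\{0,1\}^n$ in the $\lambda=\rho$ setting. The oracle simulation (`make_leq_oracle`), the elimination-style learner, and all names are illustrative assumptions, not the paper's actual algorithm or notation.

```python
from itertools import combinations
from typing import Callable, Iterable, Optional, Set, Tuple

Point = Tuple[int, ...]

def hamming_ball(x: Point, rho: int) -> Iterable[Point]:
    """Enumerate every point within Hamming distance rho of x
    (exponential in rho; fine for a small demo)."""
    n = len(x)
    for r in range(rho + 1):
        for idxs in combinations(range(n), r):
            z = list(x)
            for i in idxs:
                z[i] ^= 1
            yield tuple(z)

def make_leq_oracle(target: Callable[[Point], bool], rho: int):
    """Simulated LEQ oracle: given a hypothesis h and a training point x,
    return a counterexample z in B_rho(x) with h(z) != target(z),
    or None if h and the target agree on the whole ball."""
    def leq(h: Callable[[Point], bool], x: Point) -> Optional[Point]:
        for z in hamming_ball(x, rho):
            if h(z) != target(z):
                return z
        return None
    return leq

def robust_erm_monotone_conjunction(sample: Iterable[Point], leq, n: int):
    """Elimination-style learner driven by LEQ counterexamples. The
    hypothesis is the AND of the variables in `lits`; it starts maximally
    specific, so (in the realizable case) every counterexample is
    target-positive and lets us drop at least one literal."""
    lits: Set[int] = set(range(n))

    def h(z: Point) -> bool:
        return all(z[i] == 1 for i in lits)

    progress = True
    while progress:
        progress = False
        for x in sample:
            z = leq(h, x)
            if z is not None:
                # z is a positive counterexample: keep only literals it satisfies.
                lits.intersection_update(i for i in range(n) if z[i] == 1)
                progress = True
    return h

# Tiny usage example: target x1 AND x3 on {0,1}^4, perturbation budget rho = 1.
if __name__ == "__main__":
    target = lambda z: z[0] == 1 and z[2] == 1
    leq = make_leq_oracle(target, rho=1)
    sample = [(1, 1, 1, 1), (0, 0, 1, 0)]
    h = robust_erm_monotone_conjunction(sample, leq, n=4)
    assert all(leq(h, x) is None for x in sample)  # zero robust loss on the sample
```

Each counterexample removes at least one literal, so the loop makes at most $n$ hypothesis updates before terminating with zero robust loss on the sample; the paper's query-complexity bounds come from online mistake-bound arguments of exactly this flavor.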