Paper Title

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

Paper Authors

Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka

Paper Abstract

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) or restricted models such as correlational SQ. Prior work hinted at the impossibility of our result: Vempala and Wilmes showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from label queries.
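The heart of the reduction is the "lifting" step: a Boolean function is composed with coordinate-wise sign, turning it into a real-valued function of a standard Gaussian input, and sign itself is well approximated by a difference of two ReLUs. Below is a minimal illustrative sketch of this idea in Python; the parity stand-in and the soft_sign parameterization are assumptions for illustration only, not the paper's actual hard function family or construction.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def soft_sign(t, eps=1e-3):
    # Two-ReLU approximation of sign(t): exact whenever |t| >= eps/2,
    # which holds with probability 1 - O(eps) for a standard Gaussian t.
    return (2.0 / eps) * (relu(t + eps / 2) - relu(t - eps / 2)) - 1.0

def lift(f, x, eps=1e-3):
    # Lift a Boolean function f : {-1,+1}^n -> {-1,+1} to Gaussian inputs
    # by applying f to the (approximate) signs of the coordinates. For
    # x ~ N(0, I_n), sign(x) is uniform on {-1,+1}^n, so a learner for the
    # lifted function under Gaussian inputs yields a learner for f under
    # the uniform distribution.
    return f(soft_sign(x, eps))

# Hypothetical example: parity as a stand-in for the hard Boolean family
# (the paper instead uses cryptographically hard functions).
parity = lambda b: np.prod(b, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # 5 Gaussian samples in R^8
print(lift(parity, x))           # matches parity(sign(x)) with high probability
print(parity(np.sign(x)))
```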
