Paper Title
Optimal Convergence Rates of Deep Neural Networks in a Classification Setting
Paper Authors
Paper Abstract
We establish convergence rates, optimal up to a logarithmic factor, for a class of deep neural networks in a classification setting under a constraint sometimes referred to as the Tsybakov noise condition. We construct classifiers in a general setting in which the boundary of the Bayes rule can be approximated well by neural networks. Corresponding rates of convergence are proven with respect to the misclassification error. These rates are then shown to be optimal in the minimax sense if the boundary satisfies a smoothness condition. Non-optimal convergence rates already exist for this setting; our main contribution lies in improving the existing rates and showing their optimality, which was an open problem. Furthermore, we show almost optimal rates under some additional constraints that circumvent the curse of dimensionality. For our analysis we require a condition that gives new insight into the constraint used; in a sense, it acts as a requirement for the "correct noise exponent" for a class of functions.
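For reference, a standard formulation of the Tsybakov noise (margin) condition mentioned in the abstract is the following; the exact assumption and constants used in the paper may differ in detail. With regression function \( \eta(x) = \mathbb{P}(Y = 1 \mid X = x) \), one requires that there exist a constant \( C > 0 \) and a noise exponent \( q \ge 0 \) such that

\[
  \mathbb{P}\bigl( \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \;\le\; C\, t^{q}
  \qquad \text{for all } t \in (0, 1].
\]

Larger values of \( q \) mean that \( \eta \) is unlikely to be close to the decision threshold \( 1/2 \), which makes the classification problem easier and permits faster convergence rates.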