Paper Title
Iterative regularization in classification via hinge loss diagonal descent
Paper Authors
Paper Abstract
Iterative regularization is a classic idea in regularization theory that has recently become popular in machine learning. On the one hand, it makes it possible to design efficient algorithms that control numerical and statistical accuracy at the same time. On the other hand, it sheds light on the learning curves observed while training neural networks. In this paper, we focus on iterative regularization in the context of classification. After contrasting this setting with that of linear inverse problems, we develop an iterative regularization approach based on the hinge loss function. More precisely, we consider a diagonal approach for a family of algorithms, for which we prove convergence, as well as convergence rates and stability results, under a suitable classification noise model. Our approach compares favorably with other alternatives, as confirmed by numerical simulations.
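The core idea the abstract relies on is that the iteration count itself acts as the regularization parameter: stopping the optimization early replaces an explicit penalty term. As a rough illustration of this principle (a minimal sketch, not the paper's diagonal algorithm), the following Python code runs early-stopped subgradient descent on the hinge loss for a linear classifier; the function name, step size, and iteration budget are all hypothetical choices for this example.

```python
import numpy as np

def hinge_subgradient_descent(X, y, n_iters=200, step=0.1):
    """Early-stopped subgradient descent on the average hinge loss.

    Sketch of iterative regularization for linear classification:
    n_iters plays the role of the regularization parameter (early
    stopping) instead of an explicit penalty. Illustration only; this
    is NOT the diagonal method analyzed in the paper.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        margins = y * (X @ w)  # y_i * <w, x_i> for each sample
        # Samples with margin < 1 contribute -y_i * x_i to the
        # subgradient of the mean hinge loss max(0, 1 - y <w, x>).
        active = margins < 1.0
        g = -(X[active].T @ y[active]) / n
        w -= step * g
    return w

# Hypothetical usage on synthetic linearly separable data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = np.sign(X @ w_true)
w_hat = hinge_subgradient_descent(X, y, n_iters=500, step=0.5)
print(f"training accuracy: {np.mean(np.sign(X @ w_hat) == y):.3f}")
```

In this sketch, varying n_iters traces out a path of models of increasing complexity, which is what allows the learning curves mentioned in the abstract to be studied as a function of the number of iterations.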