Paper title
Behavior of linear L2-boosting algorithms in the vanishing learning rate asymptotic
Paper authors
Paper abstract
We investigate the asymptotic behavior of gradient boosting algorithms when the learning rate converges to zero and the number of iterations is rescaled accordingly. We mostly consider L2-boosting for regression with a linear base learner, as studied in Bühlmann and Yu (2003), and also analyze a stochastic version of the model where subsampling is used at each step (Friedman 2002). We prove a deterministic limit in the vanishing learning rate asymptotic and characterize the limit as the unique solution of a linear differential equation in an infinite-dimensional function space. In addition, the training and test errors of the limiting procedure are thoroughly analyzed. We finally illustrate and discuss our results on a simple numerical experiment where the linear L2-boosting operator is interpreted as a smoothed projection and time is related to its number of degrees of freedom.
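For intuition, here is a minimal numerical sketch (not the authors' code) of the mechanism the abstract describes, in the finite-sample linear case: the L2-boosting update F_{m+1} = F_m + nu * S(Y - F_m) with a linear smoother S, run for about t/nu steps, approaches the time-t solution F(t) = (I - exp(-tS))Y of the linear differential equation dF/dt = S(Y - F) as the learning rate nu vanishes. The kernel smoother and all parameter values below are illustrative assumptions.

# Minimal sketch: linear L2-boosting with shrinkage nu versus its
# vanishing-learning-rate limit F(t) = (I - exp(-t S)) Y.
# The smoother S (a kernel smoother) and all parameters are assumptions.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Linear base learner: Nadaraya-Watson kernel smoother matrix (rows sum to 1).
h = 0.1
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)

def boost(nu, t):
    """Run m = round(t / nu) boosting steps with learning rate nu, from F = 0."""
    F = np.zeros(n)
    for _ in range(round(t / nu)):
        F += nu * S @ (y - F)  # gradient-boosting update with shrinkage nu
    return F

t = 5.0
limit = (np.eye(n) - expm(-t * S)) @ y  # closed-form limit at time t

# The gap to the differential-equation limit shrinks as nu -> 0.
for nu in [0.5, 0.1, 0.01]:
    gap = np.max(np.abs(boost(nu, t) - limit))
    print(f"nu = {nu:5.2f}:  sup-norm gap to the ODE limit = {gap:.2e}")

Since F_0 = 0 and S is fixed, the boosted fit has the closed form F_m = (I - (I - nu*S)^m) Y, and (I - nu*S)^{t/nu} converges to exp(-tS) as nu tends to zero, which is what the decreasing gaps printed above reflect.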