Paper Title
DRSOM: A Dimension Reduced Second-Order Method
Paper Authors
Paper Abstract
In this paper, we propose a Dimension-Reduced Second-Order Method (DRSOM) for convex and nonconvex (unconstrained) optimization. Under a trust-region-like framework, our method preserves the convergence properties of second-order methods while using curvature information in only a few directions. Consequently, the computational overhead of our method remains comparable to that of first-order methods such as gradient descent. Theoretically, we show that the method has local quadratic convergence and a global convergence rate of $O(ε^{-3/2})$ to satisfy the first- and second-order optimality conditions, provided the subspace satisfies a commonly adopted approximate Hessian assumption. We further show that this assumption can be removed if we periodically perform a corrector step using a Krylov-like method at the end stage of the algorithm. The applicability and performance of DRSOM are exhibited by various computational experiments, including $L_2 - L_p$ minimization, CUTEst problems, and sensor network localization.
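The core idea described above can be illustrated with a minimal numerical sketch. The abstract does not fix the subspace or the exact subproblem, so the following assumes a two-dimensional subspace spanned by the negative gradient and the previous step (a momentum-like direction), with curvature in the subspace estimated by finite-difference Hessian-vector products and a small regularized solve standing in for the trust-region subproblem; this is an illustrative approximation, not the paper's exact algorithm.

```python
import numpy as np

def drsom_like_step(grad, x, d_prev, eps=1e-8, h=1e-6):
    """One dimension-reduced second-order step (illustrative sketch).

    Builds a 2-D subspace spanned by the negative gradient and the
    previous step direction, estimates the 2x2 curvature matrix in that
    subspace via finite-difference Hessian-vector products, and takes a
    regularized Newton step inside the subspace (a stand-in for the
    trust-region-like subproblem). Cost per step: a few gradient calls,
    no full Hessian.
    """
    g = grad(x)
    # Subspace basis: negative gradient and momentum direction (n x 2).
    V = np.stack([-g, d_prev], axis=1)
    # Q[i, j] ~ V[:, i]^T H V[:, j] via finite differences.
    HV = np.stack([(grad(x + h * V[:, j]) - g) / h for j in range(2)],
                  axis=1)
    Q = V.T @ HV
    c = V.T @ g
    # Regularized 2x2 solve: only a tiny linear system per iteration.
    alpha = np.linalg.solve(Q + eps * np.eye(2), -c)
    d = V @ alpha
    return x + d, d

# Usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer is x* = A^{-1} b = [0.2, 0.4].
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
d = -grad(x)  # initialize the "momentum" direction by steepest descent
for _ in range(20):
    x, d = drsom_like_step(grad, x, d)
print(np.round(x, 4))  # -> [0.2 0.4]
```

Because each iteration works only with a 2x2 curvature matrix, the per-step cost stays at the level of a first-order method, which is the trade-off the abstract highlights.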