Paper Title

Quasi-Newton acceleration of EM and MM algorithms via Broyden's method

Paper Authors

Medha Agarwal, Jason Xu

Paper Abstract

The principle of majorization-minimization (MM) provides a general framework for eliciting effective algorithms to solve optimization problems. However, MM algorithms often suffer from slow convergence, especially in large-scale and high-dimensional data settings. This has drawn attention to acceleration schemes designed exclusively for MM algorithms, but many existing designs are either problem-specific or rely on approximations and heuristics loosely inspired by the optimization literature. We propose a novel, rigorous quasi-Newton method for accelerating any valid MM algorithm, cast as seeking a fixed point of the MM \textit{algorithm map}. The method does not require specific information or computation from the objective function or its gradient and enjoys a limited-memory variant amenable to efficient computation in high-dimensional settings. By connecting our approach to Broyden's classical root-finding methods, we establish convergence guarantees and identify conditions for linear and super-linear convergence. These results are validated numerically and compared to peer methods in a thorough empirical study, showing that our method achieves state-of-the-art performance across a diverse range of problems.
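To make the core idea concrete, the sketch below applies Broyden's ("good") root-finding update to the residual G(x) = F(x) - x of a generic fixed-point map F, which is how a quasi-Newton scheme can accelerate an MM or EM iteration without accessing the objective or its gradient. This is a minimal illustration of the general technique, not the authors' algorithm or its limited-memory variant; the function name `broyden_accelerate`, the initialization, and the toy map in the usage line are assumptions made for the example.

```python
import numpy as np

def broyden_accelerate(F, x0, max_iter=100, tol=1e-10):
    """Accelerate a fixed-point iteration x <- F(x) by applying
    Broyden's (good) quasi-Newton update to the root-finding
    problem G(x) = F(x) - x = 0.

    Illustrative sketch only: F stands in for any MM/EM algorithm map.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = -np.eye(n)             # initial inverse-Jacobian guess for G = F - I
    g = F(x) - x               # residual G(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -H @ g             # quasi-Newton step
        x_new = x + s
        g_new = F(x_new) - x_new
        y = g_new - g
        # Broyden's rank-one update of the inverse Jacobian
        # (Sherman-Morrison form): H += (s - H y) s^T H / (s^T H y)
        Hy = H @ y
        denom = s @ Hy
        if abs(denom) > 1e-12:
            H += np.outer(s - Hy, s @ H) / denom
        x, g = x_new, g_new
    return x

# Toy usage: accelerate the contractive map x -> cos(x); in practice F
# would be an EM/MM update whose fixed point is the estimator of interest.
x_star = broyden_accelerate(np.cos, np.array([0.5]))
```

Note that with the inverse-Jacobian initialized to -I, the first step reduces to x_new = F(x), i.e., one plain MM update, and subsequent rank-one updates build curvature information from the observed iterates alone.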
