Paper Title

PAC-Bayesian Learning of Optimization Algorithms

Paper Authors

Michael Sucker, Peter Ochs

Abstract

We apply the PAC-Bayes theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework to learn optimization algorithms with provable generalization guarantees (PAC-bounds) and an explicit trade-off between a high probability of convergence and a high convergence speed. Even in the limit case, where convergence is guaranteed, our learned optimization algorithms provably outperform related algorithms based on a (deterministic) worst-case analysis. Our results rely on PAC-Bayes bounds for general, unbounded loss functions based on exponential families. By generalizing existing ideas, we reformulate the learning procedure into a one-dimensional minimization problem and study the possibility of finding a global minimum, which enables the algorithmic realization of the learning procedure. As a proof of concept, we learn hyperparameters of standard optimization algorithms to empirically underline our theory.
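The proof-of-concept idea in the abstract (learning hyperparameters of a standard optimizer from a distribution of problem instances, and comparing against a worst-case tuning) can be illustrated with a minimal sketch. This is not the paper's method or code: the paper learns a distribution over hyperparameters with PAC-Bayes generalization guarantees, whereas the sketch below only does a naive empirical-risk grid search over a single gradient-descent step size on random quadratics, comparing it with the classical worst-case choice 2/(mu + L) for mu-strongly convex, L-smooth problems. All dimensions, sample sizes, and constants are hypothetical.

import numpy as np

# Hypothetical illustration (not the paper's algorithm): learn the step
# size of gradient descent from sampled quadratic problems and compare
# against the worst-case optimal step size 2 / (mu + L).

rng = np.random.default_rng(0)
dim, n_train, n_steps = 10, 200, 20
mu, L = 0.1, 1.0  # strong-convexity and smoothness constants (assumed)

def sample_problem():
    # Random quadratic f(x) = 0.5 * x^T A x with eigenvalues in [mu, L].
    eigs = rng.uniform(mu, L, size=dim)
    Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # random orthogonal Q
    A = Q @ np.diag(eigs) @ Q.T
    x0 = rng.normal(size=dim)
    return A, x0

def loss_after_gd(alpha, A, x0):
    # Run n_steps of gradient descent x <- x - alpha * grad f(x) = x - alpha * A x,
    # then return the final objective value f(x).
    x = x0
    for _ in range(n_steps):
        x = x - alpha * (A @ x)
    return 0.5 * x @ A @ x

train = [sample_problem() for _ in range(n_train)]

# "Learning" here is a one-dimensional minimization over the step size:
# pick the alpha with smallest empirical mean loss on the training problems.
alphas = np.linspace(0.01, 2.0 / L, 200)
mean_losses = [np.mean([loss_after_gd(a, A, x0) for A, x0 in train])
               for a in alphas]
alpha_learned = alphas[int(np.argmin(mean_losses))]
alpha_worst_case = 2.0 / (mu + L)  # classical worst-case tuning

# On fresh problems from the same distribution, the learned step size is
# typically at least as good as the worst-case one on average.
test = [sample_problem() for _ in range(100)]
print("learned step size:", alpha_learned)
print("mean test loss (learned):   ",
      np.mean([loss_after_gd(alpha_learned, A, x0) for A, x0 in test]))
print("mean test loss (worst-case):",
      np.mean([loss_after_gd(alpha_worst_case, A, x0) for A, x0 in test]))

The sketch mirrors the abstract's claim only in spirit: averaging over a problem distribution can beat a deterministic worst-case tuning, but it provides none of the PAC-Bayes generalization guarantees or the convergence-probability trade-off that the paper establishes.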
