论文标题
关于随机优化算法的高参数调整
On Hyper-parameter Tuning for Stochastic Optimization Algorithms
论文作者
论文摘要
本文提出了有史以来第一个基于增强学习的随机优化算法的超参数的算法框架。超参数对随机优化算法的性能产生了重大影响,例如进化算法(EAS)和荟萃术。但是,由于这些算法的随机性,确定最佳的超参数是非常耗时的。我们建议将调整过程建模为马尔可夫决策过程,并求助于策略梯度算法以调整超参数。针对不同的优化问题(连续和离散)调整不同类型的超参数(连续和离散)的随机算法的实验表明,所提出的超参数调谐算法并不需要比贝耶斯优化方法更少的随机算法运行时间。所提出的框架可以用作随机算法中超参数调整的标准工具。
This paper proposes the first-ever algorithmic framework for tuning hyper-parameters of stochastic optimization algorithm based on reinforcement learning. Hyper-parameters impose significant influences on the performance of stochastic optimization algorithms, such as evolutionary algorithms (EAs) and meta-heuristics. Yet, it is very time-consuming to determine optimal hyper-parameters due to the stochastic nature of these algorithms. We propose to model the tuning procedure as a Markov decision process, and resort the policy gradient algorithm to tune the hyper-parameters. Experiments on tuning stochastic algorithms with different kinds of hyper-parameters (continuous and discrete) for different optimization problems (continuous and discrete) show that the proposed hyper-parameter tuning algorithms do not require much less running times of the stochastic algorithms than bayesian optimization method. The proposed framework can be used as a standard tool for hyper-parameter tuning in stochastic algorithms.