Paper Title
Adaptive Learning of the Optimal Batch Size of SGD
Paper Authors
Paper Abstract
Recent advances in the theoretical understanding of SGD have led to a formula for the optimal batch size minimizing the number of effective data passes, i.e., the number of iterations times the batch size. However, this formula is of no practical value as it depends on knowledge of the variance of the stochastic gradients evaluated at the optimum. In this paper we design a practical SGD method capable of learning the optimal batch size adaptively throughout its iterations for strongly convex and smooth functions. Our method does this provably, and in our experiments with synthetic and real data it robustly exhibits nearly optimal behaviour; that is, it works as if the optimal batch size were known a priori. Further, we generalize our method to several new batch strategies not considered in the literature before, including a sampling suitable for distributed implementations.
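To illustrate the idea described in the abstract, the following is a minimal sketch of mini-batch SGD on a toy strongly convex least-squares problem, where the batch size is periodically re-estimated from the empirical variance of the per-example gradients. This is not the paper's method: the abstract does not give the formula, so the proxy (variance at the current iterate in place of the unknown variance at the optimum) and the proportionality rule and constant used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy strongly convex problem: f(x) = (1/2n) * ||A x - b||^2.
n, d = 200, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

def per_example_grads(x, idx):
    """Per-example gradients for the rows in idx, shape (len(idx), d)."""
    r = A[idx] @ x - b[idx]
    return A[idx] * r[:, None]

def adaptive_sgd(steps=400, lr=0.01, tau0=4, update_every=50):
    x = np.zeros(d)
    tau = tau0                      # current batch size
    sizes = [tau]
    for t in range(steps):
        idx = rng.choice(n, size=tau, replace=False)
        g = per_example_grads(x, idx).mean(axis=0)
        x -= lr * g
        if (t + 1) % update_every == 0:
            # Estimate the gradient variance at the current iterate as a
            # proxy for the variance at the optimum (which the exact
            # formula requires but which is unknown in practice).
            gi = per_example_grads(x, np.arange(n))
            sigma2 = np.mean(np.sum((gi - gi.mean(axis=0)) ** 2, axis=1))
            # Illustrative rule: batch size proportional to the estimated
            # noise level, clamped to [1, n]. The constant 0.05 is an
            # assumption for this toy problem, not the paper's formula.
            tau = int(np.clip(round(sigma2 / 0.05), 1, n))
            sizes.append(tau)
    return x, sizes

x_hat, batch_sizes = adaptive_sgd()
```

As the iterates approach the optimum the residuals shrink, the variance estimate drops, and the schedule moves from large batches (noisy regime) toward smaller ones, mimicking the adaptive behaviour the abstract describes.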