Paper Title
Stability and Generalization of Stochastic Optimization with Nonconvex and Nonsmooth Problems
Paper Authors
Paper Abstract
Stochastic optimization has found wide applications in minimizing objective functions in machine learning, which has motivated many theoretical studies to understand its practical success. Most existing studies focus on the convergence of optimization errors, while the generalization analysis of stochastic optimization lags far behind. This is especially the case for nonconvex and nonsmooth problems often encountered in practice. In this paper, we initiate a systematic stability and generalization analysis of stochastic optimization for nonconvex and nonsmooth problems. We introduce novel algorithmic stability measures and establish quantitative connections between them and the gap between population gradients and empirical gradients; we then extend this analysis to the gap between the Moreau envelope of the empirical risk and that of the population risk. To our knowledge, these quantitative connections between stability and generalization, in terms of either gradients or Moreau envelopes, have not been studied in the literature. We introduce a class of sampling-determined algorithms, for which we develop bounds for three stability measures. Finally, we apply these results to derive error bounds for stochastic gradient descent and its adaptive variants, where we show how implicit regularization can be achieved by tuning the step sizes and the number of iterations.
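For readers unfamiliar with the Moreau envelope mentioned in the abstract, the standard definition is sketched below in LaTeX. The notation ($\lambda$ for the smoothing parameter, $F_S$ and $F$ for the empirical and population risks) is our illustrative choice and need not match the paper's.

```latex
% Moreau envelope of a (possibly nonsmooth, weakly convex) function f,
% with smoothing parameter \lambda > 0:
\[
  f_{\lambda}(w) \;=\; \inf_{v} \Big\{ f(v) \;+\; \frac{1}{2\lambda}\,\lVert v - w \rVert_2^{2} \Big\}.
\]
% For weakly convex f and small enough \lambda, f_\lambda is smooth and
% \lVert \nabla f_\lambda(w) \rVert serves as a standard stationarity measure.
% The generalization gap studied here is then of the form
\[
  \big| F_{S,\lambda}(w) \;-\; F_{\lambda}(w) \big|,
\]
% where F_S is the empirical risk on the sample S and F is the population risk.
```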
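The abstract's final point, implicit regularization via step sizes and iteration counts, concerns plain SGD of the form $w_{t+1} = w_t - \eta_t \nabla f(w_t; z_{i_t})$. Below is a minimal, hypothetical sketch; the function names, the uniform sampling scheme, and the decaying step-size schedule are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def sgd(grad, w0, data, step_sizes, n_iters, rng):
    """Plain SGD: at each step, sample one example uniformly and take a gradient step.

    grad(w, z) is assumed to return a (sub)gradient of the loss at example z.
    Tuning step_sizes and n_iters is what controls the implicit regularization
    discussed in the abstract: fewer iterations / smaller steps trade
    optimization error for stability, and hence for generalization.
    """
    w = np.asarray(w0, dtype=float)
    n = len(data)
    for t in range(n_iters):
        z = data[rng.integers(n)]          # uniform sampling of one example
        w = w - step_sizes[t] * grad(w, z)  # (sub)gradient step
    return w

# Usage example: least squares on synthetic data, step sizes eta_t = c / sqrt(t+1).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
data = list(zip(X, y))
grad = lambda w, z: (z[0] @ w - z[1]) * z[0]   # gradient of 0.5*(x@w - y)^2
T = 500
w_hat = sgd(grad, np.zeros(5), data, [0.1 / np.sqrt(t + 1) for t in range(T)], T, rng)
```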