Paper Title

Stochastic Gradient Descent Meets Distribution Regression

Paper Authors

Mücke, Nicole

Paper Abstract

Stochastic gradient descent (SGD) provides a simple and efficient way to solve a broad range of machine learning problems. Here, we focus on distribution regression (DR), involving two stages of sampling: Firstly, we regress from probability measures to real-valued responses. Secondly, we sample bags from these distributions, utilizing them to solve the overall regression problem. Recently, DR has been tackled by applying kernel ridge regression, and the learning properties of this approach are well understood. However, nothing is known about the learning properties of SGD for two-stage sampling problems. We fill this gap and provide theoretical guarantees for the performance of SGD for DR. Our bounds are optimal in a mini-max sense under standard assumptions.
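
As a rough illustration of the setting described in the abstract, the sketch below builds a toy distribution-regression problem with two-stage sampling and runs one pass of kernel SGD over empirical kernel mean embeddings of the bags. The data-generating process, the Gaussian-on-MMD bag kernel, and all parameter choices (n, N, bandwidths, the step size eta) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy two-stage sampling setup (illustrative; parameters are not from the paper) ---
# Stage 1: n distributions mu_i, each paired with a real-valued response y_i.
# Stage 2: each mu_i is observed only through a bag of N samples drawn from it.
n, N = 100, 30
means = rng.uniform(-2.0, 2.0, size=n)                 # parameter identifying each mu_i
bags = [rng.normal(m, 0.5, size=N) for m in means]     # observed bags of samples
y = np.sin(means) + 0.05 * rng.standard_normal(n)      # noisy real-valued responses

def gauss(a, b, s):
    """Gaussian kernel matrix between two 1-D sample arrays."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s ** 2))

def bag_kernel(b1, b2, s_x=0.5, s_mu=1.0):
    """Kernel between two bags via their empirical kernel mean embeddings:
    a Gaussian kernel on the squared MMD between the bags."""
    mmd2 = (gauss(b1, b1, s_x).mean() + gauss(b2, b2, s_x).mean()
            - 2.0 * gauss(b1, b2, s_x).mean())
    return np.exp(-max(mmd2, 0.0) / (2 * s_mu ** 2))

# Gram matrix over bags; the regression now lives on the embedded distributions
K = np.array([[bag_kernel(bi, bj) for bj in bags] for bi in bags])

# --- One-pass SGD in the RKHS over embeddings (constant step size, squared loss) ---
alpha = np.zeros(n)                  # coefficients of f = sum_i alpha_i k(mu_i, .)
eta = 0.5                            # step size chosen for illustration only
for t in rng.permutation(n):
    residual = K[t] @ alpha - y[t]   # f_t(mu_t) - y_t
    alpha[t] -= eta * residual       # functional gradient step along k(mu_t, .)

print("train MSE after one SGD pass:", np.mean((K @ alpha - y) ** 2))
```

In this sketch the kernel ridge regression baseline mentioned in the abstract would solve for alpha in closed form from the same Gram matrix K, whereas SGD updates one coefficient per observed bag; the second sampling stage enters only through the empirical mean embeddings used to compare bags.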
