Title


On the Linear Convergence Rate of the Distributed Block Proximal Method

Authors

Francesco Farina, Giuseppe Notarstefano

Abstract


The recently developed Distributed Block Proximal Method, for solving stochastic big-data convex optimization problems, is studied in this paper under the assumption of constant stepsizes and strongly convex (possibly non-smooth) local objective functions. This class of problems arises in many learning and classification problems in which, for example, strongly-convex regularizing functions are included in the objective function, the decision variable is extremely high dimensional, and large datasets are employed. The algorithm produces local estimates by means of block-wise updates and communication among the agents. The expected distance from the (global) optimum, in terms of cost value, is shown to decay linearly to a constant value which is proportional to the selected local stepsizes. A numerical example involving a classification problem corroborates the theoretical results.
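To make the abstract's description concrete, here is a minimal sketch of a block-wise proximal-gradient scheme of the kind described: agents keep local estimates, each iteration updates only one randomly chosen block of coordinates with a constant stepsize, and a proximal (soft-thresholding) step handles a non-smooth L1 regularizer. This is a generic illustration under simplifying assumptions (synthetic least-squares local losses, and full averaging in place of the paper's network-based communication), not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 (handles the non-smooth term)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n_agents, dim, n_blocks = 4, 20, 5
blocks = np.array_split(np.arange(dim), n_blocks)

# each agent holds a strongly convex local loss 0.5 * ||A_i x - b_i||^2
# (synthetic data, purely for illustration)
A = [rng.standard_normal((30, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(30) for _ in range(n_agents)]
lam, step = 0.1, 1e-3          # L1 weight and constant local stepsize

x = [np.zeros(dim) for _ in range(n_agents)]    # local estimates
for it in range(2000):
    # communication step: simple averaging over all agents,
    # a stand-in for the paper's agent-to-agent exchange
    avg = np.mean(x, axis=0)
    for i in range(n_agents):
        blk = blocks[rng.integers(n_blocks)]    # random block per agent
        g = A[i].T @ (A[i] @ avg - b[i])        # local smooth gradient
        xi = avg.copy()
        # block-wise proximal-gradient update with constant stepsize
        xi[blk] = soft_threshold(avg[blk] - step * g[blk], step * lam)
        x[i] = xi
```

With strongly convex local losses and a constant stepsize, the suboptimality of such schemes typically decays linearly until it reaches a floor proportional to the stepsize, which is the behavior the abstract reports for the Distributed Block Proximal Method.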
