Paper Title

Theoretical Analysis of Divide-and-Conquer ERM: Beyond Square Loss and RKHS

Authors

Yong Liu, Lizhong Ding, Weiping Wang

Abstract

Theoretical analysis of divide-and-conquer based distributed learning with least square loss in the reproducing kernel Hilbert space (RKHS) has recently been explored within the framework of learning theory. However, studies on learning theory for general loss functions and hypothesis spaces remain limited. To fill this gap, we study the risk performance of distributed empirical risk minimization (ERM) for general loss functions and hypothesis spaces. The main contributions are two-fold. First, we derive two tight risk bounds under certain basic assumptions on the hypothesis space, as well as the smoothness, Lipschitz continuity, and strong convexity of the loss function. Second, we further develop a more general risk bound for distributed ERM without the restriction of strong convexity.
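
For readers unfamiliar with the setup, below is a minimal sketch of divide-and-conquer ERM, assuming square loss and a linear hypothesis space with ridge regularization; the partitioning scheme, local solver, and regularization parameter `lam` are illustrative assumptions, not the paper's exact setting. The data are split into disjoint subsets, a local ERM estimator is computed on each subset, and the local estimators are averaged.

```python
# A minimal sketch of divide-and-conquer ERM, assuming square loss and a
# linear hypothesis space with ridge (Tikhonov) regularization. The
# partitioning, local solver, and lam are illustrative choices only.
import numpy as np

def local_erm(X, y, lam=0.1):
    """Regularized least-squares ERM on one data partition:
    argmin_w (1/n) * ||Xw - y||^2 + lam * ||w||^2 (closed form)."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

def divide_and_conquer_erm(X, y, num_partitions=4, lam=0.1):
    """Split the sample into disjoint partitions, run ERM locally on each,
    and average the local estimators (the divide-and-conquer step)."""
    local_estimators = [
        local_erm(X_j, y_j, lam)
        for X_j, y_j in zip(np.array_split(X, num_partitions),
                            np.array_split(y, num_partitions))
    ]
    return np.mean(local_estimators, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    X = rng.normal(size=(1000, 5))
    y = X @ w_true + 0.1 * rng.normal(size=1000)
    w_hat = divide_and_conquer_erm(X, y, num_partitions=4)
    print("estimation error:", np.linalg.norm(w_hat - w_true))
```

Here each partition is solved in closed form for the square loss; for the general losses and hypothesis spaces considered in the paper, any suitable ERM solver could be used in place of `local_erm`.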
