Paper Title

Adaptive Estimation in High-Dimensional Additive Models with Multi-Resolution Group Lasso

Paper Authors

Yisha Yao, Cun-Hui Zhang

Paper Abstract

In additive models with many nonparametric components, a number of regularized estimators have been proposed and proven to attain various error bounds under different combinations of sparsity and fixed-smoothness conditions. Some of these error bounds match the minimax rates in the corresponding settings, but some of the rate-minimax methods are non-convex and computationally costly. From this perspective, the existing solutions to the high-dimensional additive nonparametric regression problem are fragmented. In this paper, we propose a multi-resolution group Lasso (MR-GL) method as a unified approach that simultaneously achieves or improves existing error bounds and provides new ones, without knowledge of the level of sparsity or the degree of smoothness of the unknown functions. Such adaptive convergence rates are established when a prediction factor can be treated as a constant. Furthermore, we prove that the prediction factor, which can be bounded in terms of a restricted eigenvalue or a compatibility coefficient, can indeed be treated as a constant for random designs under a nearly optimal sample size condition.
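The group Lasso underlying MR-GL can be illustrated with a minimal sketch: each predictor is expanded in a small basis, the basis coefficients of each predictor form one group, and a group-wise penalty zeroes out entire inactive components. Everything below (the proximal-gradient solver, the toy Gaussian design, the 4-term basis size, and the penalty level) is an illustrative assumption for exposition, not the paper's actual multi-resolution construction or tuning.

```python
import numpy as np

def block_soft_threshold(v, t):
    """Group-wise proximal map: shrink v toward zero by t in Euclidean norm."""
    nv = np.linalg.norm(v)
    if nv <= t:
        return np.zeros_like(v)
    return (1.0 - t / nv) * v

def group_lasso(X, y, groups, lam, n_iter=500):
    """Proximal-gradient solver for (1/2n)||y - Xb||^2 + lam * sum_g ||b_g||.
    `groups` is a list of index arrays, one per coefficient group."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the smooth part
    step = 1.0 / L
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n   # gradient step on the squared loss
        beta = beta - step * grad
        for g in groups:                  # prox step, one group at a time
            beta[g] = block_soft_threshold(beta[g], step * lam)
    return beta

# Toy additive design: 6 predictors, each expanded in a hypothetical 4-term
# basis; only the first two additive components are active.
rng = np.random.default_rng(0)
n, d, k = 200, 6, 4
X = rng.standard_normal((n, d * k))
groups = [np.arange(j * k, (j + 1) * k) for j in range(d)]
beta_true = np.zeros(d * k)
beta_true[groups[0]] = [1.0, -1.0, 0.5, 0.0]
beta_true[groups[1]] = [0.0, 0.8, -0.6, 0.3]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = group_lasso(X, y, groups, lam=0.2)
group_norms = [np.linalg.norm(beta_hat[g]) for g in groups]
```

On this toy problem the penalty drives the four inactive groups (essentially) to zero while the two active groups keep nonzero norm, which is the sparsity pattern the abstract's error bounds concern.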
