Paper Title


Quantizing Heavy-tailed Data in Statistical Estimation: (Near) Minimax Rates, Covariate Quantization, and Uniform Recovery

Paper Authors

Junren Chen, Michael K. Ng, Di Wang

Paper Abstract


This paper studies the quantization of heavy-tailed data in some fundamental statistical estimation problems, where the underlying distributions have bounded moments of some order. We propose to truncate and properly dither the data prior to a uniform quantization. Our major standpoint is that (near) minimax rates of estimation error are achievable merely from the quantized data produced by the proposed scheme. In particular, concrete results are worked out for covariance estimation, compressed sensing, and matrix completion, all agreeing that quantization only slightly worsens the multiplicative factor. In addition, we study compressed sensing in which both the covariates (i.e., sensing vectors) and the responses are quantized. Under covariate quantization, although our recovery program is non-convex because the covariance matrix estimator lacks positive semi-definiteness, all local minimizers are proved to enjoy near-optimal error bounds. Moreover, via a concentration inequality for product processes and a covering argument, we establish near-minimax uniform recovery guarantees for quantized compressed sensing with heavy-tailed noise.
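The quantization scheme described in the abstract — truncate the data, add a uniform dither, then apply a uniform quantizer — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the choice of truncation threshold `tau`, and the bin width `delta` are assumptions for demonstration. A key property of uniform dithering on `[-delta/2, delta/2]` is that the quantized value is an unbiased estimate of the (truncated) input.

```python
import numpy as np

def truncate_dither_quantize(x, tau, delta, rng=None):
    """Truncate entries to [-tau, tau], add a uniform dither on
    [-delta/2, delta/2], then round to the nearest multiple of delta.

    tau   : truncation threshold (handles heavy tails).
    delta : quantization bin width.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_t = np.clip(x, -tau, tau)                       # truncation step
    dither = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((x_t + dither) / delta)   # uniform quantizer
```

By construction, each output is a multiple of `delta` and lies within `delta` of the truncated input; averaging many independent quantizations of the same in-range value recovers it, reflecting the unbiasedness that the paper's estimators rely on.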
