Title
Sub-Gaussian Matrices on Sets: Optimal Tail Dependence and Applications
Authors
Abstract
Random linear mappings are widely used in modern signal processing, compressed sensing, and machine learning. These mappings may be used to embed the data into a significantly lower dimension while at the same time preserving useful information. This is done by approximately preserving the distances between data points, which are assumed to belong to $\mathbb{R}^n$. Thus, the performance of these mappings is usually captured by how close they are to an isometry on the data. Gaussian linear mappings have been the object of much study, while the sub-Gaussian setting is not yet fully understood. In the latter case, the performance depends on the sub-Gaussian norm of the rows. In many applications, e.g., compressed sensing, this norm may be large, or even grow with dimension, and thus it is important to characterize this dependence. We study when a sub-Gaussian matrix can become a near isometry on a set, show that the previously best known dependence on the sub-Gaussian norm was sub-optimal, and present the optimal dependence. Our result not only answers a remaining question posed by Liaw, Mehrabian, Plan and Vershynin in 2017, but also generalizes their work. We also develop a new Bernstein-type inequality for sub-exponential random variables, and a new Hanson-Wright inequality for quadratic forms of sub-Gaussian random variables, in both cases improving the bounds in the sub-Gaussian regime under moment constraints. Finally, we illustrate popular applications such as Johnson-Lindenstrauss embeddings, the null space property for 0-1 matrices, randomized sketches, and blind demodulation, whose theoretical guarantees can be improved by our results (in the sub-Gaussian case).
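The near-isometry property described above can be checked empirically. The sketch below, a minimal illustration and not the paper's method, draws a sub-Gaussian (Rademacher) random matrix, applies it to a small point set in $\mathbb{R}^n$, and measures the worst-case relative distortion of pairwise distances in the spirit of Johnson-Lindenstrauss embeddings. All dimensions, the point count, and the random seed are illustrative choices, not values from the paper.

```python
# Illustrative sketch: a sub-Gaussian (Rademacher) random projection
# approximately preserving pairwise distances. All parameters below are
# arbitrary choices for demonstration, not constants from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, num_points = 1000, 200, 20  # ambient dim, embedding dim, number of points

# Data points in R^n and a Rademacher (+/-1) matrix scaled so that
# E ||A x||^2 = ||x||^2, i.e., A is an isometry in expectation.
X = rng.standard_normal((num_points, n))
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def max_distortion(X, A):
    """Worst relative change in pairwise distances under x -> A x."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d_orig = np.linalg.norm(X[i] - X[j])
            d_emb = np.linalg.norm(A @ (X[i] - X[j]))
            worst = max(worst, abs(d_emb / d_orig - 1.0))
    return worst

print(f"max relative distortion: {max_distortion(X, A):.3f}")
```

With $m = 200$ rows, each pairwise distance concentrates around its original value, so the maximum distortion over all pairs stays small even though the embedding dimension is a fifth of the ambient dimension.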