Paper Title

Deep generative demixing: Recovering Lipschitz signals from noisy subgaussian mixtures

Authors

Berk, Aaron

Abstract

Generative neural networks (GNNs) have gained renown for efficaciously capturing intrinsic low-dimensional structure in natural images. Here, we investigate the subgaussian demixing problem for two Lipschitz signals, with GNN demixing as a special case. In demixing, one seeks identification of two signals given their sum and prior structural information. Here, we assume each signal lies in the range of a Lipschitz function, which includes many popular GNNs as a special case. We prove a sample complexity bound for nearly optimal recovery error that extends a recent result of Bora et al. (2017) from the compressed sensing setting with Gaussian matrices to demixing with subgaussian ones. Under a linear signal model in which the signals lie in convex sets, McCoy & Tropp (2014) have characterized the sample complexity for identification under subgaussian mixing. In the present setting, the signal structure need not be convex. For example, our result applies to a domain that is a non-convex union of convex cones. We support the efficacy of this demixing model with numerical simulations using trained GNNs, suggesting an algorithm that would be an interesting object of further theoretical study.
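The abstract mentions numerical simulations with trained GNNs but does not state the algorithm. A minimal sketch of one natural candidate is empirical-risk minimization over the two latent codes by gradient descent. The snippet below uses random one-layer ReLU networks as hypothetical stand-ins for trained Lipschitz generators; all dimensions, names, and the Rademacher choice of subgaussian measurement matrix are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Lipschitz "generators": fixed one-layer random-ReLU nets standing in
# for trained GNNs. All dimensions here are illustrative assumptions.
k, n, m = 5, 50, 40          # latent dim, signal dim, measurement count
W1 = rng.normal(size=(n, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, k)) / np.sqrt(k)
G1 = lambda z: np.maximum(W1 @ z, 0.0)   # ReLU nets are Lipschitz
G2 = lambda z: np.maximum(W2 @ z, 0.0)

# Subgaussian (here Rademacher) measurement matrix; noisy mixed
# observation y = A(x1 + x2) + noise with x_i in range(G_i)
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
z1_true, z2_true = rng.normal(size=k), rng.normal(size=k)
y = A @ (G1(z1_true) + G2(z2_true)) + 0.01 * rng.normal(size=m)

def loss(z1, z2):
    """Empirical risk ||y - A(G1(z1) + G2(z2))||^2."""
    r = y - A @ (G1(z1) + G2(z2))
    return float(r @ r)

def grads(z1, z2):
    """(Sub)gradients of the risk through the ReLU generators."""
    r = y - A @ (G1(z1) + G2(z2))
    s = A.T @ r                              # backprop through A
    g1 = -2.0 * W1.T @ ((W1 @ z1 > 0) * s)   # ReLU mask, then W1^T
    g2 = -2.0 * W2.T @ ((W2 @ z2 > 0) * s)
    return g1, g2

# Plain gradient descent over both latent codes from a random start
# (a random start avoids the zero subgradient of ReLU at z = 0)
z1, z2 = rng.normal(size=k), rng.normal(size=k)
loss_init = loss(z1, z2)
for _ in range(5000):
    g1, g2 = grads(z1, z2)
    z1 -= 1e-3 * g1
    z2 -= 1e-3 * g2
```

Since the objective is non-convex in (z1, z2), such descent has no a priori guarantee; the abstract's point is precisely that an algorithm of this flavor is an interesting object of further theoretical study. Recovery quality can be inspected afterwards, e.g. via `np.linalg.norm(G1(z1) - G1(z1_true))`.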
