Paper Title
Compressive Sensing and Neural Networks from a Statistical Learning Perspective
Paper Authors
Paper Abstract
Various iterative reconstruction algorithms for inverse problems can be unfolded as neural networks. Empirically, this approach has often led to improved results, but theoretical guarantees are still scarce. While some progress on the generalization properties of neural networks has been made, great challenges remain. In this chapter, we discuss and combine these topics to present a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements. The hypothesis class considered is inspired by the classical iterative soft-thresholding algorithm (ISTA). The neural networks in this class are obtained by unfolding iterations of ISTA and learning some of the weights. Based on training samples, we aim to learn the optimal network parameters via empirical risk minimization and, thereby, the optimal network that reconstructs signals from their compressive linear measurements. In particular, we may learn a sparsity basis that is shared by all iterations/layers, thereby obtaining a new approach to dictionary learning. For this class of networks, we present a generalization bound based on bounding the Rademacher complexity of hypothesis classes consisting of such deep networks via Dudley's integral. Remarkably, under realistic conditions, the generalization error scales only logarithmically in the number of layers and at most linearly in the number of measurements.
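For illustration, the following is a minimal NumPy sketch of the forward pass of a network obtained by unfolding ISTA with a dictionary shared across all layers. It is not the authors' code; the names (A, Phi, tau) and the fixed step size are assumptions made for the example. In the learned variant described in the abstract, the dictionary Phi (and possibly the threshold tau) would be chosen by empirical risk minimization over training samples.

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise soft-thresholding: S_tau(x) = sign(x) * max(|x| - tau, 0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, Phi, tau, n_layers=10):
    """Forward pass of an n_layers-layer network obtained by unfolding ISTA.

    y   : measurement vector (length m), assumed y ≈ A @ Phi @ z for a sparse code z
    A   : measurement matrix (m x n)
    Phi : sparsity basis / dictionary (n x n), shared by all layers (hypothetical learnable weight)
    tau : soft-thresholding parameter
    """
    B = A @ Phi                                  # effective measurement matrix in the dictionary domain
    step = 1.0 / np.linalg.norm(B, 2) ** 2       # step size 1/L with L >= ||B||_2^2
    z = np.zeros(Phi.shape[1])
    for _ in range(n_layers):                    # each ISTA iteration becomes one network layer
        z = soft_threshold(z - step * B.T @ (B @ z - y), step * tau)
    return Phi @ z                               # reconstructed signal in the original domain
```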