Paper Title

Predicting Generalization in Deep Learning via Local Measures of Distortion

Paper Authors

Abhejit Rajagopal, Vamshi C. Madala, Shivkumar Chandrasekaran, Peder E. Z. Larson

Paper Abstract

We study generalization in deep learning by appealing to complexity measures originally developed in approximation and information theory. While these concepts are challenged by the high-dimensional and data-defined nature of deep learning, we show that simple vector quantization approaches such as PCA, GMMs, and SVMs capture their spirit when applied layer-wise to deep extracted features, giving rise to relatively inexpensive complexity measures that correlate well with generalization performance. We discuss our results in the context of the 2020 NeurIPS PGDL challenge.
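The abstract does not spell out the exact measures used, but a minimal sketch of the kind of layer-wise, quantization-based complexity proxy it describes might look like the following (using scikit-learn; the function names, the variance threshold, and the specific proxies shown here, a PCA explained-variance count and a GMM negative log-likelihood, are illustrative assumptions rather than the paper's actual method):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def pca_distortion(features, var_threshold=0.95):
    """Hypothetical complexity proxy: fraction of principal components
    needed to explain `var_threshold` of the variance of a layer's
    extracted features (lower = more compressible features)."""
    pca = PCA().fit(features)  # features: (n_samples, n_dims)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    n_components = int(np.searchsorted(cum_var, var_threshold) + 1)
    return n_components / features.shape[1]


def gmm_distortion(features, n_components=8, seed=0):
    """Hypothetical quantization-style proxy: average negative
    log-likelihood of the features under a small Gaussian mixture,
    treated as a crude codebook."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(features)
    return -gmm.score(features)  # mean NLL per sample


if __name__ == "__main__":
    # Random stand-ins for deep features extracted from two layers.
    rng = np.random.default_rng(0)
    layer_feats = {
        "layer1": rng.normal(size=(512, 64)),
        "layer2": rng.normal(size=(512, 128)),
    }
    for name, feats in layer_feats.items():
        print(name, pca_distortion(feats), gmm_distortion(feats))
```

In practice such scores would be computed per layer on features extracted from the trained network and then aggregated (e.g., averaged) into a single measure whose ranking across models is compared against their measured generalization gaps.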
