Paper Title
Generalization Error Bounds on Deep Learning with Markov Datasets
Paper Authors
Paper Abstract
In this paper, we derive upper bounds on generalization errors for deep neural networks with Markov datasets. These bounds are developed based on Koltchinskii and Panchenko's approach for bounding the generalization error of combined classifiers with i.i.d. datasets. The development of new symmetrization inequalities in high-dimensional probability for Markov chains is a key element in our extension, where the spectral gap of the infinitesimal generator of the Markov chain serves as a key parameter in these inequalities. We also propose a simple method to convert these bounds, and other similar ones in traditional deep learning and machine learning, to Bayesian counterparts for both i.i.d. and Markov datasets. Extensions to $m$-th order homogeneous Markov chains, such as AR and ARMA models, and to mixtures of several Markov data sources are given.
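For orientation only, and not reproduced from this paper: Koltchinskii and Panchenko's margin-type bounds for i.i.d. samples, which the abstract takes as its starting point, have roughly the schematic form below for a class $\mathcal{F}$ of $[-1,1]$-valued combined classifiers, margin $\delta \in (0,1]$, sample size $n$, and confidence level $1-\epsilon$; the constants $C_1, C_2$ are illustrative placeholders.

$$
\mathbb{P}\bigl(Y f(X) \le 0\bigr)
\;\le\;
\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{Y_i f(X_i) \le \delta\}
\;+\;
\frac{C_1}{\delta}\, R_n(\mathcal{F})
\;+\;
C_2\sqrt{\frac{\log(1/\epsilon)}{n}},
$$

where $R_n(\mathcal{F})$ is a Rademacher-complexity term. The extension described in the abstract can be read, under this schematic view, as replacing the i.i.d. symmetrization step behind such bounds with symmetrization inequalities for Markov chains, in which the spectral gap of the infinitesimal generator rescales the sample-size-dependent terms, so that slower mixing (a smaller gap) yields looser bounds.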