Paper Title
A Convenient Infinite Dimensional Framework for Generative Adversarial Learning
Paper Authors
Paper Abstract
In recent years, generative adversarial networks (GANs) have demonstrated impressive experimental results, while only a few works have fostered a statistical learning theory for GANs. In this work, we propose an infinite dimensional theoretical framework for generative adversarial learning. We assume that the probability density functions of the underlying measures are uniformly bounded, $k$-times $\alpha$-Hölder differentiable ($C^{k,\alpha}$), and uniformly bounded away from zero. Under these assumptions, we show that the Rosenblatt transformation induces an optimal generator, which is realizable in the hypothesis space of $C^{k,\alpha}$-generators. With a consistent definition of the hypothesis space of discriminators, we further show that the Jensen-Shannon divergence between the distribution induced by the generator obtained from the adversarial learning procedure and the data generating distribution converges to zero. Under certain regularity assumptions on the density of the data generating process, we also provide rates of convergence based on chaining and concentration.
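As a pointer to the key construction, the Rosenblatt transformation can be sketched as follows (the notation here is ours and is not taken from the abstract): for a random vector $X = (X_1, \dots, X_d)$ with a sufficiently regular joint density, set
\[
T(x) = \bigl( F_1(x_1),\; F_{2\mid 1}(x_2 \mid x_1),\; \dots,\; F_{d\mid 1,\dots,d-1}(x_d \mid x_1, \dots, x_{d-1}) \bigr),
\]
where $F_1$ denotes the marginal distribution function of $X_1$ and $F_{j\mid 1,\dots,j-1}$ the conditional distribution function of $X_j$ given the preceding coordinates. Then $T(X)$ is uniformly distributed on $[0,1]^d$, and under regularity assumptions of the kind stated above the inverse map $T^{-1}$ pushes uniform noise forward to the data distribution; this is the sense in which the transformation induces an optimal generator.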