Title
Some Theoretical Insights into Wasserstein GANs
Authors
Abstract
Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of the cousin approach called Wasserstein GANs (WGANs), which brings stabilization to the training process. In the present paper, we add a new stone to the edifice by proposing some theoretical advances on the properties of WGANs. First, we properly define the architecture of WGANs in the context of integral probability metrics parameterized by neural networks and highlight some of their basic mathematical features. We stress, in particular, the interesting optimization properties arising from the use of a parametric 1-Lipschitz discriminator. Then, in a statistically driven approach, we study the convergence of empirical WGANs as the sample size tends to infinity, and clarify the adversarial effects of the generator and the discriminator by underlining some trade-off properties. These features are finally illustrated with experiments using both synthetic and real-world datasets.
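The objective mentioned in the abstract — a Wasserstein distance approximated by an integral probability metric over a parametric class of (roughly) 1-Lipschitz critics — can be sketched in a toy one-dimensional setting. This is a minimal illustration under assumed choices (a one-hidden-layer critic, weight clipping to encourage the Lipschitz constraint, Gaussian toy data), not the paper's exact architecture:

```python
import numpy as np

# Toy sketch of the empirical WGAN/IPM objective:
#   W(P, Q) ≈ max_theta  E_P[D_theta(x)] - E_Q[D_theta(x)],
# with D_theta kept approximately 1-Lipschitz via weight clipping.
# All concrete choices (layer sizes, clip value, data) are illustrative.

rng = np.random.default_rng(0)

def clip_weights(params, c=0.1):
    """Project every parameter into [-c, c] (weight clipping)."""
    return {k: np.clip(v, -c, c) for k, v in params.items()}

def critic(x, params):
    """One-hidden-layer critic D_theta: R -> R, applied to a batch x."""
    h = np.maximum(0.0, x[:, None] * params["w1"] + params["b1"])  # ReLU layer
    return h @ params["w2"] + params["b2"]

def wgan_objective(real, fake, params):
    """Empirical IPM objective: mean D(real) - mean D(fake)."""
    return critic(real, params).mean() - critic(fake, params).mean()

# "Real" samples from N(1, 1) and "fake" (generated) samples from N(-1, 1).
real = rng.normal(1.0, 1.0, size=500)
fake = rng.normal(-1.0, 1.0, size=500)

params = clip_weights({
    "w1": rng.normal(size=16), "b1": rng.normal(size=16),
    "w2": rng.normal(size=16), "b2": 0.0,
})

gap = wgan_objective(real, fake, params)
```

In a full training loop, the critic's parameters would be ascended on this objective (re-clipping after each step) while the generator descends on it, which is the adversarial trade-off the abstract studies as the sample size grows.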