Paper Title
Mode Penalty Generative Adversarial Network with adapted Auto-encoder
Paper Authors
Paper Abstract
Generative Adversarial Networks (GANs) are trained to generate sample images from a distribution of interest. To this end, the generator network of a GAN learns the implicit distribution of the real data set through classification against candidate generated samples. Recently, various GANs have proposed novel ideas for stably optimizing their networks. However, in practical implementations they sometimes still cover only a narrow part of the true distribution or fail to converge. We attribute this ill-posed behavior to poor gradients from the discriminator's objective function, which easily trap the generator in a bad state. To address this problem, we propose a mode-penalty GAN combined with a pre-trained auto-encoder, which represents generated and real data samples explicitly in the encoded space. In this space, we make the generator manifold follow the real manifold by finding all modes of the target distribution. In addition, a penalty on modes of the target distribution left uncovered by the generator encourages it to cover the overall target distribution. We demonstrate through experimental evaluations that applying the proposed method to GANs makes the generator's optimization more stable and its convergence faster.
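The abstract only sketches the penalty at a high level; the following is a minimal, hypothetical Python/PyTorch sketch of one way a mode penalty in the encoded space could be realized, not the authors' implementation. The names E (frozen pre-trained encoder), G (generator), mode_centers (e.g. k-means centroids of encoded real samples), sigma, and lambda_mode are all illustrative assumptions.

# Illustrative sketch only: penalize real-data modes that generated samples miss,
# measured in the encoded space of a pre-trained auto-encoder.
import torch

def mode_penalty(E, G, z_batch, mode_centers, sigma=1.0):
    """E: frozen encoder; G: generator; z_batch: latent noise batch;
    mode_centers: (K, d) tensor of real-data mode locations in encoded space."""
    fake_codes = E(G(z_batch))                     # (B, d) encoded generated samples
    dists = torch.cdist(mode_centers, fake_codes)  # (K, B) mode-to-sample distances
    nearest, _ = dists.min(dim=1)                  # (K,) distance to closest generated code
    # Modes far from every generated code contribute heavily, pushing the
    # generator manifold toward the uncovered modes.
    return torch.mean(1.0 - torch.exp(-nearest.pow(2) / (2 * sigma ** 2)))

# Hypothetical use in a generator update:
# loss_G = adversarial_loss + lambda_mode * mode_penalty(E, G, z, mode_centers)

Under these assumptions, keeping the encoder frozen means the penalty operates in a fixed, explicit representation, which is consistent with the abstract's claim that the encoded space gives the generator a stable target for covering all modes.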