Paper Title
GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models
Paper Authors
Paper Abstract
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image, sound, and text data, they perform suboptimally on multi-modal distribution learning benchmarks, including Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a nonconvex-concave minimax optimization problem. We show that a gradient descent ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show that this stationary point recovers the true parameters of the underlying GMM. We support our theoretical findings with several numerical experiments, which demonstrate that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.
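To make the minimax structure concrete, below is a minimal sketch of a gradient descent ascent loop for learning a symmetric two-component GMM with a random linear generator and a quadratic critic. This is not the paper's exact formulation: the moment-matching surrogate objective, the plain (non-softmax) quadratic critic, the critic regularizer `lam`, and all hyperparameters are illustrative assumptions.

```python
# Illustrative GDA sketch for a GAT-GMM-style zero-sum game (assumptions:
# simplified moment-matching objective, plain quadratic critic instead of
# the paper's softmax-based quadratic discriminator).
import numpy as np

rng = np.random.default_rng(0)
d = 2                                    # data dimension

# Ground truth: symmetric two-component GMM with means +/- mu_true
mu_true = np.array([2.0, 0.0])

def sample_real(n):
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    return signs * mu_true + rng.normal(size=(n, d))

# Generator (descent player): random linear map z -> s*mu + z, s uniform in {-1,+1}
mu = 0.1 * rng.normal(size=d)

# Discriminator (ascent player): quadratic critic D(x) = x^T A x + b^T x
A = np.zeros((d, d))
b = np.zeros(d)

eta_g, eta_d, lam, n = 0.01, 0.1, 1.0, 512
for _ in range(2000):
    x_real = sample_real(n)
    s = rng.choice([-1.0, 1.0], size=(n, 1))
    x_fake = s * mu + rng.normal(size=(n, d))

    # Ascent step on V(mu, A, b) = E[D(real)] - E[D(fake)], with an L2
    # penalty on the critic to keep the inner maximization well posed
    # (a stabilizing assumption, not from the paper).
    A += eta_d * ((x_real.T @ x_real - x_fake.T @ x_fake) / n - lam * A)
    b += eta_d * ((x_real - x_fake).mean(axis=0) - lam * b)

    # Descent step for the generator mean:
    # dV/dmu = -E[ s * ((A + A^T) x_fake + b) ]
    grad_mu = -(s * (x_fake @ (A + A.T) + b)).mean(axis=0)
    mu -= eta_g * grad_mu

# Up to the sign symmetry of the mixture, mu should approach +/- mu_true.
print("estimated mean:", np.round(mu, 2))
```

Under this surrogate, the regularized critic settles near the gap between the real and generated second moments, so the generator update approximately performs gradient descent on that moment mismatch; its fixed points are the true means up to the mixture's sign symmetry.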