Paper Title
Training Discrete Deep Generative Models via Gapped Straight-Through Estimator
Paper Authors
Paper Abstract
While deep generative models have succeeded in image processing, natural language processing, and reinforcement learning, training that involves discrete random variables remains challenging due to the high variance of the gradient estimation process. Monte Carlo sampling is a common solution used in most variance reduction approaches; however, it involves time-consuming resampling and multiple function evaluations. We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead. This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax; we identify these properties and show via an ablation study that they are indeed essential. Experiments demonstrate that the proposed GST estimator outperforms strong baselines on two discrete deep generative modeling tasks, MNIST-VAE and ListOps.
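For reference, the abstract names Straight-Through Gumbel-Softmax (ST-GS) as the inspiration for GST. Below is a minimal PyTorch sketch of the standard ST-GS estimator, not the paper's GST estimator (whose construction is not given in this abstract); the function name and the temperature parameter `tau` are illustrative choices of ours.

```python
import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Sample Gumbel(0, 1) noise via inverse transform; epsilons guard against log(0).
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    # Relaxed (differentiable) sample, used only for the backward pass.
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    # Hard one-hot sample, used in the forward pass.
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    # Straight-through trick: forward emits y_hard, gradients flow through y_soft.
    return y_hard - y_soft.detach() + y_soft

# Usage: the downstream model sees a discrete one-hot sample, yet
# gradients with respect to the logits still exist.
logits = torch.randn(4, 10, requires_grad=True)
sample = straight_through_gumbel_softmax(logits, tau=0.5)
sample.sum().backward()
```

Per the abstract, GST is designed to retain the essential properties of this baseline while reducing gradient variance without the resampling and repeated function evaluations that Monte Carlo based variance reduction requires.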