Paper Title
Can RBMs be trained with zero step contrastive divergence?
Paper Authors
Paper Abstract
Restricted Boltzmann Machines (RBMs) are probabilistic generative models that can in principle be trained by maximum likelihood, but are usually trained in practice by an approximate algorithm called Contrastive Divergence (CD). In general, a CD-k algorithm estimates an average with respect to the model distribution using a sample obtained from a k-step Markov chain Monte Carlo algorithm (e.g., block Gibbs sampling) starting from some initial configuration. Choices of k typically vary from 1 to 100. This technical report explores whether it is possible to leverage a simple approximate sampling algorithm with a modified version of CD in order to train an RBM with k=0. As usual, the method is illustrated on MNIST.
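For reference, the sketch below shows standard CD-k training for a binary RBM in NumPy; the class, function names, and hyperparameters are illustrative assumptions, not the paper's code. Note that with plain CD the k=0 case degenerates (the positive and negative statistics coincide, so the gradient vanishes), which is why the report proposes a modified sampling scheme; that modification is not described in the abstract and is therefore not shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM with a standard CD-k update (illustrative sketch)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def sample_h(self, v):
        # p(h=1 | v) and a binary sample from it
        p = sigmoid(v @ self.W + self.c)
        return p, (self.rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        # p(v=1 | h) and a binary sample from it
        p = sigmoid(h @ self.W.T + self.b)
        return p, (self.rng.random(p.shape) < p).astype(float)

    def cd_k_update(self, v_data, k=1, lr=0.05):
        # Positive phase: hidden statistics given the data.
        ph_data, h = self.sample_h(v_data)
        # Negative phase: k steps of block Gibbs sampling started at the data.
        # With k = 0 this loop does nothing and the update below is zero,
        # which is the degenerate case the report's modified sampler addresses.
        v_model, ph_model = v_data, ph_data
        for _ in range(k):
            _, v_model = self.sample_v(h)
            ph_model, h = self.sample_h(v_model)
        # Approximate log-likelihood gradient step (data minus model statistics).
        n = v_data.shape[0]
        self.W += lr * (v_data.T @ ph_data - v_model.T @ ph_model) / n
        self.b += lr * (v_data - v_model).mean(axis=0)
        self.c += lr * (ph_data - ph_model).mean(axis=0)
```

As a usage example, on MNIST one would binarize 28x28 images into vectors of length 784 and call `cd_k_update` on minibatches, e.g. `rbm = RBM(784, 500); rbm.cd_k_update(batch, k=1)`.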