Paper Title
Revealing Unobservables by Deep Learning: Generative Element Extraction Networks (GEEN)
Paper Authors
Paper Abstract
Latent variable models are crucial in scientific research, where a key variable, such as effort, ability, or belief, is unobserved in the sample but needs to be identified. This paper proposes a novel method for estimating realizations of a latent variable $X^*$ in a random sample that contains multiple measurements of it. Under the key assumption that the measurements are independent conditional on $X^*$, we provide sufficient conditions under which realizations of $X^*$ in the sample are locally unique within a class of deviations, which allows us to identify realizations of $X^*$. To the best of our knowledge, this paper is the first to provide such identification at the observation level. We then use the Kullback-Leibler distance between the two probability densities, with and without conditional independence, as the loss function to train a Generative Element Extraction Network (GEEN) that maps the observed measurements to realizations of $X^*$ in the sample. Simulation results show that the proposed estimator performs well, and the estimated values are highly correlated with the realizations of $X^*$. Our estimator can be applied to a large class of latent variable models, and we expect it to change how people deal with latent variables.
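To make the training objective described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a GEEN-style training step: a network maps the k measurements to an estimated X*, and the loss is an empirical KL-type distance between a kernel estimate of the joint density of (X_1, ..., X_k, X*) and the factorized density f(X*) * prod_j f(X_j | X*) implied by conditional independence. The architecture, the in-batch Gaussian-kernel density estimates, the bandwidth, and all names (GEEN, geen_loss, gaussian_kernel) are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

def gaussian_kernel(u, bandwidth):
    """Gaussian kernel weights used for the in-batch density estimates."""
    return torch.exp(-0.5 * (u / bandwidth) ** 2) / (bandwidth * math.sqrt(2 * math.pi))

class GEEN(nn.Module):
    """Maps the k observed measurements to an estimated realization of X*."""
    def __init__(self, k, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):            # x: (batch, k)
        return self.net(x)           # (batch, 1) estimated X*

def geen_loss(x, xstar, bandwidth=0.1, eps=1e-12):
    """KL-type distance between the density of (X_1, ..., X_k, X*) and the
    factorization f(X*) * prod_j f(X_j | X*) implied by conditional
    independence, both estimated with product Gaussian kernels on the batch."""
    batch, k = x.shape
    k_star = gaussian_kernel(xstar - xstar.T, bandwidth)        # (batch, batch)
    f_star = k_star.mean(dim=1) + eps                           # ~ f(X*)
    diffs = x.unsqueeze(1) - x.unsqueeze(0)                     # (batch, batch, k)
    k_x = gaussian_kernel(diffs, bandwidth)                     # (batch, batch, k)
    # density without conditional independence: joint of (X_1, ..., X_k, X*)
    log_joint = torch.log((k_x.prod(dim=2) * k_star).mean(dim=1) + eps)
    # density with conditional independence: f(X*) * prod_j f(X_j | X*)
    log_factorized = torch.log(f_star)
    for j in range(k):
        f_j_star = (k_x[:, :, j] * k_star).mean(dim=1) + eps    # ~ f(X_j, X*)
        log_factorized = log_factorized + torch.log(f_j_star / f_star)
    # sample average of the log-density ratio approximates the KL distance
    return (log_joint - log_factorized).mean()

# Usage sketch: one gradient step on a mini-batch of placeholder measurements.
model = GEEN(k=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_batch = torch.randn(256, 3)
loss = geen_loss(x_batch, model(x_batch))
opt.zero_grad(); loss.backward(); opt.step()
```

Driving this loss toward zero pushes the extracted X* toward a variable under which the measurements are (approximately) conditionally independent, which is the abstract's stated identification criterion; the specific kernel smoothing above is only one way such densities could be estimated in practice.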