Paper Title

Learning Latent Causal Structures with a Redundant Input Neural Network

Authors

Jonathan D. Young, Bryan Andrews, Gregory F. Cooper, Xinghua Lu

Abstract

Most causal discovery algorithms find causal structure among a set of observed variables. Learning the causal structure among latent variables remains an important open problem, particularly when using high-dimensional data. In this paper, we address a setting in which inputs are known to cause outputs, and these causal relationships are encoded by a causal network over an unknown number of latent variables. We developed a deep learning model, which we call a redundant input neural network (RINN), with a modified architecture and a regularized objective function, to find causal relationships among input, hidden, and output variables. More specifically, our model allows input variables to interact directly with all latent variables in the neural network, influencing what information the latent variables encode in order to generate the output variables accurately. In this setting, the direct connections between input and latent variables make the latent variables partially interpretable; furthermore, the connectivity among the latent variables in the network models their potential causal relationships to each other and to the output variables. A series of simulation experiments provides support that the RINN method can successfully recover latent causal structure between input and output variables.
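The "redundant input" idea described in the abstract can be sketched minimally as a feed-forward network in which the input vector is fed into every hidden layer, not just the first, together with a sparsity-inducing L1 penalty whose near-zero weights are read as absent causal edges. The layer sizes, random weights, and penalty strength below are illustrative assumptions, not the authors' actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 3, 2

# Each later layer receives [previous activations; x], hence the "+ n_in":
# this is the redundant connection from the inputs to every latent layer.
W1 = rng.normal(size=(n_hidden, n_in))              # first hidden layer: x only
W2 = rng.normal(size=(n_hidden, n_hidden + n_in))   # x re-enters here
W_out = rng.normal(size=(n_out, n_hidden + n_in))   # and at the output layer

def relu(z):
    return np.maximum(z, 0.0)

def rinn_forward(x):
    """Forward pass with the input concatenated into every layer."""
    h1 = relu(W1 @ x)
    h2 = relu(W2 @ np.concatenate([h1, x]))
    return W_out @ np.concatenate([h2, x])

def l1_penalty(lam=0.1):
    """Sparsity regularizer added to the training loss (illustrative value
    of lam); weights driven near zero are interpreted as missing edges."""
    return lam * sum(np.abs(W).sum() for W in (W1, W2, W_out))

x = rng.normal(size=n_in)
y = rinn_forward(x)
print(y.shape)  # (2,)
```

Because the inputs connect directly to each latent layer, a latent unit's incoming input weights indicate which observed inputs it encodes, which is what makes the latent variables partially interpretable.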
