Paper Title
Learning Disentangled Representations of Negation and Uncertainty
Paper Authors
Paper Abstract
Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent of each other and of the content they modify. However, previous work on representation learning does not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.
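The core idea of disentanglement described above can be illustrated with a minimal sketch: a VAE encoder produces a latent vector that is partitioned into separate negation, uncertainty, and content subspaces, with the first two small enough to be supervised by auxiliary classifiers. The sketch below is illustrative only; all names (`encode`, `split_latent`), dimensions, and the linear encoder are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear VAE encoder: parameters of q(z|x) = N(mu, diag(sigma^2)).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def split_latent(z, d_neg=2, d_unc=2):
    # Partition the latent vector into negation, uncertainty, and content parts.
    # Supervision (and adversarial / MI objectives) would act on these slices.
    return z[:, :d_neg], z[:, d_neg:d_neg + d_unc], z[:, d_neg + d_unc:]

# Toy batch: 4 "sentence encodings" of dimension 8, total latent dimension 10.
x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 10))
W_logvar = rng.standard_normal((8, 10)) * 0.01
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
z_neg, z_unc, z_content = split_latent(z)
print(z_neg.shape, z_unc.shape, z_content.shape)  # (4, 2) (4, 2) (4, 6)
```

In this setup, classifiers on `z_neg` and `z_unc` would predict negation and uncertainty labels, while adversarial or mutual-information penalties would discourage `z_content` from carrying that information.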