Paper Title

Modal Uncertainty Estimation via Discrete Latent Representation

Paper Authors

Di Qiu, Lok Ming Lui

Abstract

Many important problems in the real world don't have unique solutions. It is thus important for machine learning models to be capable of proposing different plausible solutions with meaningful probability measures. In this work we introduce such a deep learning framework that learns the one-to-many mappings between the inputs and outputs, together with faithful uncertainty measures. We call our framework {\it modal uncertainty estimation} since we model the one-to-many mappings to be generated through a set of discrete latent variables, each representing a latent mode hypothesis that explains the corresponding type of input-output relationship. The discrete nature of the latent representations thus allows us to estimate for any input the conditional probability distribution of the outputs very effectively. Both the discrete latent space and its uncertainty estimation are jointly learned during training. We motivate our use of discrete latent space through the multi-modal posterior collapse problem in current conditional generative models, then develop the theoretical background, and extensively validate our method on both synthetic and realistic tasks. Our framework demonstrates significantly more accurate uncertainty estimation than the current state-of-the-art methods, and is informative and convenient for practical use.
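The central mechanism described above, representing each type of input-output relationship by one of K discrete latent codes so that the conditional distribution over outputs reduces to a categorical distribution over codes, can be sketched as follows. This is a minimal, hypothetical illustration with random stand-in weights, not the authors' trained model: the names `ModalUncertaintySketch`, `W_enc`, `W_dec`, and `codes` are invented for this sketch, and the linear encoder/decoder stand in for the paper's learned networks.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class ModalUncertaintySketch:
    """Toy sketch (assumed structure, not the paper's architecture):
    K discrete latent codes, each acting as a 'mode hypothesis' that
    explains one type of input-output relationship. The encoder scores
    codes given the input; the decoder maps (input, code) to an output
    hypothesis. All weights are random placeholders."""

    def __init__(self, in_dim, out_dim, num_codes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(size=(num_codes, in_dim))   # scores each code given x
        self.codes = rng.normal(size=(num_codes, out_dim))  # one embedding per discrete code
        self.W_dec = rng.normal(size=(out_dim, in_dim))     # input-conditioned part of the decoder

    def predict(self, x):
        # Categorical distribution over the K latent modes given x:
        # because the latent space is discrete, this is an explicit,
        # exact conditional distribution rather than a sampled estimate.
        probs = softmax(self.W_enc @ x)
        # One output hypothesis per discrete code.
        hypotheses = [self.W_dec @ x + c for c in self.codes]
        # Return hypotheses ranked by their probability, most likely first.
        order = np.argsort(-probs)
        return [(float(probs[k]), hypotheses[k]) for k in order]

model = ModalUncertaintySketch(in_dim=4, out_dim=2, num_codes=5)
x = np.ones(4)
ranked = model.predict(x)
total = sum(p for p, _ in ranked)
```

The point of the sketch is the shape of the output: a finite, ranked list of distinct hypotheses with probabilities that sum to one, which is what makes the uncertainty estimate directly readable at inference time.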
