Paper Title
Multimodal sensor fusion in the latent representation space
Paper Authors
Paper Abstract
A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as both the reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate its effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
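The second stage described above, recovering a signal from subsampled observations by searching over the latent space of a pretrained generative model, can be illustrated with a minimal sketch. This is not the paper's implementation: a fixed linear map `W` stands in for a trained decoder, `A` is a row-subsampling operator, and the latent code `z` is fit by gradient descent on the data-fit objective. All names (`W`, `A`, `z`, the step size) are illustrative assumptions.

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's model):
#   W : a fixed linear "decoder" mapping latent z to a full signal x = W z
#   A : a subsampling operator that keeps only a few entries of x
rng = np.random.default_rng(0)
latent_dim, signal_dim, n_obs = 4, 32, 12
W = rng.standard_normal((signal_dim, latent_dim))
A = np.eye(signal_dim)[rng.choice(signal_dim, n_obs, replace=False)]

# Ground-truth latent code and the subsampled (compressed-sensing) observations.
z_true = rng.standard_normal(latent_dim)
y = A @ W @ z_true

# Stage-2 search: minimize 0.5 * ||A W z - y||^2 over the latent code z
# by gradient descent, using a step size below 1 / L (L = Lipschitz constant).
lr = 1.0 / np.linalg.norm(A @ W, 2) ** 2
z = np.zeros(latent_dim)
for _ in range(5000):
    residual = A @ W @ z - y        # mismatch on the observed entries
    z -= lr * (W.T @ A.T @ residual)  # gradient step in latent space

# Decoding the fitted latent code reconstructs the *full* signal,
# including the entries that were never observed.
x_recovered = W @ z
```

In an actual system the linear decoder would be replaced by the learned generative model, and the gradient step would be computed through the decoder network; the structure of the search, optimizing a latent code so the decoded signal matches partial observations, is the same.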