Paper Title

UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders

Paper Authors

Jing Zhang, Deng-Ping Fan, Yuchao Dai, Saeed Anwar, Fatemeh Sadat Saleh, Tong Zhang, Nick Barnes

Paper Abstract

In this paper, we propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process. Existing RGB-D saliency detection methods treat the saliency detection task as a point estimation problem, and produce a single saliency map following a deterministic learning pipeline. Inspired by the saliency data labeling process, we propose a probabilistic RGB-D saliency detection network via conditional variational autoencoders to model human annotation uncertainty and generate multiple saliency maps for each input image by sampling in the latent space. With the proposed saliency consensus process, we are able to generate an accurate saliency map based on these multiple predictions. Quantitative and qualitative evaluations on six challenging benchmark datasets against 18 competing algorithms demonstrate the effectiveness of our approach in learning the distribution of saliency maps, leading to a new state-of-the-art in RGB-D saliency detection.
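The abstract's core mechanism (sample several latent codes conditioned on the RGB-D input, decode one saliency map per code, then fuse the samples by consensus) can be illustrated with a minimal sketch. This is not the authors' UC-Net implementation: the tiny backbone, the TinyCVAESaliency module, and the majority-vote saliency_consensus function below are illustrative assumptions, and the real model additionally trains the latent space with a posterior network and a CVAE objective that this toy example omits.

```python
# Minimal sketch (not the authors' code) of CVAE-style stochastic saliency
# prediction plus a consensus step. All names are illustrative assumptions.
import torch
import torch.nn as nn


class TinyCVAESaliency(nn.Module):
    """Toy CVAE head: a latent code z, sampled per forward pass,
    modulates a deterministic saliency decoder."""

    def __init__(self, feat_dim=32, latent_dim=8):
        super().__init__()
        # Stand-in backbone: 4-channel RGB-D input -> feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(4, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Prior network: predicts mu and log-variance of p(z | x)
        # from globally pooled features.
        self.prior = nn.Linear(feat_dim, 2 * latent_dim)
        # Decoder: features concatenated with tiled z -> saliency logits.
        self.decoder = nn.Conv2d(feat_dim + latent_dim, 1, 3, padding=1)
        self.latent_dim = latent_dim

    def forward(self, rgbd):
        feat = self.backbone(rgbd)                       # B x C x H x W
        pooled = feat.mean(dim=(2, 3))                   # B x C
        mu, logvar = self.prior(pooled).chunk(2, dim=1)  # B x Z each
        # Reparameterization trick: a fresh z per call gives a new map.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        z_map = z[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return torch.sigmoid(self.decoder(torch.cat([feat, z_map], dim=1)))


def saliency_consensus(preds, threshold=0.5):
    """Majority vote over binarized samples; a stand-in for the paper's
    saliency consensus process (the paper's exact procedure differs)."""
    votes = (torch.stack(preds) > threshold).float().mean(dim=0)
    return (votes > 0.5).float()


if __name__ == "__main__":
    model = TinyCVAESaliency()
    rgbd = torch.rand(1, 4, 64, 64)             # RGB (3) + depth (1)
    samples = [model(rgbd) for _ in range(5)]   # multiple stochastic maps
    final = saliency_consensus(samples)
    print(final.shape)                          # torch.Size([1, 1, 64, 64])
```

The point of the sketch is the inference-time behavior described in the abstract: repeated forward passes draw different latent samples and therefore yield different plausible saliency maps, which the consensus step reduces to a single prediction.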
