Paper Title

OPAL: Occlusion Pattern Aware Loss for Unsupervised Light Field Disparity Estimation

Paper Authors

Peng Li, Jiayin Zhao, Jingyao Wu, Chao Deng, Haoqian Wang, Tao Yu

Paper Abstract

Light field disparity estimation is an essential task in computer vision with various applications. Although supervised learning-based methods have achieved both higher accuracy and efficiency than traditional optimization-based methods, the dependence on ground-truth disparity for training limits their overall generalization performance, not to mention real-world scenarios where ground-truth disparity is hard to capture. In this paper, we argue that unsupervised methods can achieve comparable accuracy and, more importantly, much higher generalization capacity and efficiency than supervised methods. Specifically, we present the Occlusion Pattern Aware Loss, named OPAL, which successfully extracts and encodes the general occlusion patterns inherent in the light field for loss calculation. OPAL enables: i) accurate and robust estimation by effectively handling occlusions without using any ground-truth information for training, and ii) much more efficient performance by significantly reducing the number of network parameters required for accurate inference. In addition, a transformer-based network and a refinement module are proposed to achieve even more accurate results. Extensive experiments demonstrate that our method not only significantly improves accuracy compared with SOTA unsupervised methods, but also possesses strong generalization capacity, even on real-world data, compared with supervised methods. Our code will be made publicly available.
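
To ground the idea, the sketch below illustrates in PyTorch the general recipe behind unsupervised light-field disparity learning that OPAL builds on: warp each sub-aperture view to the center view with the predicted disparity and penalize the photometric error, while suppressing the contribution of occluded views. This is a minimal, hypothetical illustration only. The function names (warp_view_to_center, occlusion_aware_photometric_loss), the per-pixel top-k view selection, and the keep_ratio parameter are assumptions of this sketch, not the paper's actual occlusion-pattern extraction and encoding, which OPAL derives from the light field itself.

```python
import torch
import torch.nn.functional as F

def warp_view_to_center(view, disparity, du, dv):
    """Warp one sub-aperture view toward the center view with a disparity map.

    view:      (B, C, H, W) sub-aperture image at angular offset (du, dv)
    disparity: (B, 1, H, W) disparity predicted for the center view
    du, dv:    angular offsets (in views) relative to the center view
    """
    _, _, h, w = view.shape
    # Base sampling grid in the normalized [-1, 1] coordinates grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=view.device),
        torch.linspace(-1.0, 1.0, w, device=view.device),
        indexing="ij",
    )
    # Shift sampling positions by disparity * angular offset (the sign convention
    # depends on how the light field is parameterized).
    shift_x = 2.0 * disparity[:, 0] * du / max(w - 1, 1)
    shift_y = 2.0 * disparity[:, 0] * dv / max(h - 1, 1)
    grid = torch.stack((xs + shift_x, ys + shift_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(view, grid, align_corners=True, padding_mode="border")

def occlusion_aware_photometric_loss(center, views, offsets, disparity, keep_ratio=0.5):
    """Photometric loss that discards the worst-matching views per pixel.

    A plain average over all warped views is corrupted wherever a view is
    occluded; keeping only the best-matching fraction of views per pixel is a
    crude stand-in for explicit occlusion-pattern modeling.
    """
    errors = []
    for view, (du, dv) in zip(views, offsets):
        warped = warp_view_to_center(view, disparity, du, dv)
        errors.append((warped - center).abs().mean(dim=1, keepdim=True))  # (B, 1, H, W)
    errors = torch.cat(errors, dim=1)                       # (B, N_views, H, W)
    k = max(1, int(keep_ratio * errors.shape[1]))
    best, _ = torch.topk(errors, k, dim=1, largest=False)   # per-pixel lowest errors
    return best.mean()
```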
