Paper Title

Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization

Paper Authors

Jing Jin, Junhui Hou, Jie Chen, Sam Kwong

Paper Abstract

Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution as the limited sampling resources have to be shared with the angular dimension. LF spatial super-resolution (SR) thus becomes an indispensable part of the LF camera processing pipeline. The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR. The performance of existing methods is still limited as they fail to thoroughly explore the coherence among LF views and are insufficient in accurately preserving the parallax structure of the scene. In this paper, we propose a novel learning-based LF spatial SR framework, in which each view of an LF image is first individually super-resolved by exploring the complementary information among views with combinatorial geometry embedding. For accurate preservation of the parallax structure among the reconstructed views, a regularization network trained over a structure-aware loss function is subsequently appended to enforce correct parallax relationships over the intermediate estimation. Our proposed approach is evaluated over datasets with a large number of testing images including both synthetic and real-world scenes. Experimental results demonstrate the advantage of our approach over state-of-the-art methods, i.e., our method not only improves the average PSNR by more than 1.0 dB but also preserves more accurate parallax details, at a lower computational cost.
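The abstract describes a two-stage design: each view is first super-resolved individually using complementary information from the other views (the combinatorial geometry embedding), and a regularization network trained with a structure-aware loss then refines the stacked intermediate views to enforce consistent parallax. The PyTorch-style sketch below only illustrates that data flow under stated assumptions; the module names (PerViewSR, RegularizationNet), layer widths, and the EPI-gradient-style structure loss are illustrative placeholders, not the authors' released implementation.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All architectural details here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerViewSR(nn.Module):
    """Stage 1: super-resolve one target view by fusing its features
    with features embedded from the remaining (auxiliary) views."""

    def __init__(self, num_aux_views: int, scale: int = 2, feat: int = 32):
        super().__init__()
        self.scale = scale
        self.encode = nn.Conv2d(1, feat, 3, padding=1)
        # Fuse target-view features with the auxiliary-view embeddings.
        self.fuse = nn.Sequential(
            nn.Conv2d(feat * (num_aux_views + 1), feat, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(feat, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, target, aux_views):
        # target: (B, 1, H, W); aux_views: (B, N, 1, H, W)
        feats = [self.encode(target)]
        for i in range(aux_views.shape[1]):
            feats.append(self.encode(aux_views[:, i]))
        residual = self.upsample(self.fuse(torch.cat(feats, dim=1)))
        base = F.interpolate(target, scale_factor=self.scale,
                             mode="bicubic", align_corners=False)
        return base + residual


class RegularizationNet(nn.Module):
    """Stage 2: jointly refine all intermediate estimates so the
    parallax structure across views remains consistent."""

    def __init__(self, num_views: int, feat: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_views, feat, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, num_views, 3, padding=1),
        )

    def forward(self, views):
        # views: (B, N, H, W) stack of intermediate super-resolved views.
        return views + self.net(views)


def structure_aware_loss(pred, gt):
    """Illustrative structure-aware term (an assumption): L1 reconstruction
    plus a penalty on gradient differences of the view stack, a rough
    EPI-gradient-style proxy for parallax consistency."""
    l1 = F.l1_loss(pred, gt)
    gx = (pred[..., :, 1:] - pred[..., :, :-1]) - (gt[..., :, 1:] - gt[..., :, :-1])
    gy = (pred[..., 1:, :] - pred[..., :-1, :]) - (gt[..., 1:, :] - gt[..., :-1, :])
    return l1 + gx.abs().mean() + gy.abs().mean()
```

A typical forward pass would apply PerViewSR once per view, stack the outputs into a view volume, refine the stack with RegularizationNet, and compute the structure-aware loss against the ground-truth views.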
