Title

Content-aware Warping for View Synthesis

Authors

Mantang Guo, Junhui Hou, Jing Jin, Hui Liu, Huanqiang Zeng, Jiwen Lu

Abstract

Existing image-based rendering methods usually adopt depth-based image warping to synthesize novel views. In this paper, we argue that the essential limitations of the traditional warping operation are its limited neighborhood and its purely distance-based interpolation weights. To this end, we propose content-aware warping, which adaptively learns the interpolation weights for pixels of a relatively large neighborhood from their contextual information via a lightweight neural network. Built on this learnable warping module, we propose a new end-to-end, learning-based framework for novel view synthesis from a set of input source views, in which two additional modules, namely confidence-based blending and feature-assistant spatial refinement, are naturally introduced to handle the occlusion issue and capture the spatial correlation among pixels of the synthesized view, respectively. In addition, we propose a weight-smoothness loss term to regularize the network. Experimental results on light-field datasets with wide baselines and on multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually. The source code will be publicly available at https://github.com/MantangGuo/CW4VS.
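
The abstract describes replacing fixed, distance-based interpolation weights (e.g. bilinear) with weights predicted over a relatively large neighborhood by a lightweight network. Below is a minimal, hypothetical PyTorch sketch of that idea only; it is not the authors' implementation (see the linked repository), and the module name, feature dimension, neighborhood size, and the way the reprojection grid is obtained are assumptions for illustration.

```python
# Hypothetical sketch of content-aware warping: a small network predicts
# softmax-normalized interpolation weights over a k x k neighborhood of
# source pixels from their contextual features, instead of using fixed
# distance-based (bilinear) weights. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentAwareWarp(nn.Module):
    def __init__(self, feat_dim: int = 16, k: int = 5):
        super().__init__()
        self.k = k
        # Lightweight network: maps stacked per-neighbor context features
        # to one unnormalized interpolation weight per neighbor.
        self.weight_net = nn.Sequential(
            nn.Conv2d(k * k * feat_dim, 64, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, k * k, 1),
        )

    def forward(self, src_img, src_feat, grid):
        """
        src_img : (B, 3, H, W)  source-view colors
        src_feat: (B, C, H, W)  contextual features of the source view
        grid    : (B, H, W, 2)  sampling locations in the source view,
                  normalized to [-1, 1], e.g. from depth-based reprojection
        """
        b, _, h, w = src_img.shape
        k, c = self.k, src_feat.shape[1]

        # Integer pixel offsets covering the k x k neighborhood, converted
        # to the normalized coordinate system used by grid_sample (x, y).
        offs = torch.arange(k, device=src_img.device, dtype=grid.dtype) - k // 2
        dy, dx = torch.meshgrid(offs, offs, indexing="ij")
        shift = torch.stack([dx * 2.0 / (w - 1), dy * 2.0 / (h - 1)], dim=-1)
        shift = shift.view(1, 1, 1, k * k, 2)

        # Shift the sampling grid to gather every neighbor of each sample point.
        grids = (grid.unsqueeze(3) + shift).view(b, h, w * k * k, 2)
        nb_img = F.grid_sample(src_img, grids, align_corners=True)
        nb_feat = F.grid_sample(src_feat, grids, align_corners=True)
        nb_img = nb_img.view(b, 3, h, w, k * k)
        nb_feat = nb_feat.view(b, c, h, w, k * k)

        # Predict one weight per neighbor from its context, normalize with
        # softmax so the weights of each target pixel sum to one.
        ctx = nb_feat.permute(0, 1, 4, 2, 3).reshape(b, c * k * k, h, w)
        weights = torch.softmax(self.weight_net(ctx), dim=1)      # (B, k*k, H, W)

        # Content-aware interpolation: weighted sum over the neighborhood.
        warped = (nb_img * weights.permute(0, 2, 3, 1).unsqueeze(1)).sum(-1)
        return warped                                             # (B, 3, H, W)


# Hypothetical usage:
#   warp = ContentAwareWarp(feat_dim=16, k=5)
#   novel_view = warp(src_img, src_feat, reprojection_grid)
```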
