Paper Title

Coordinates Are NOT Lonely -- Codebook Prior Helps Implicit Neural 3D Representations

Paper Authors

Fukun Yin, Wen Liu, Zilong Huang, Pei Cheng, Tao Chen, Gang Yu

Paper Abstract

Implicit neural 3D representation has achieved impressive results in surface or scene reconstruction and novel view synthesis, typically using coordinate-based multi-layer perceptrons (MLPs) to learn a continuous scene representation. However, existing approaches, such as Neural Radiance Field (NeRF) and its variants, usually require dense input views (i.e., 50-150) to obtain decent results. To relieve the over-dependence on massive calibrated images and enrich the coordinate-based feature representation, we explore injecting prior information into the coordinate-based network and introduce a novel coordinate-based model, CoCo-INR, for implicit neural 3D representation. The core of our method consists of two attention modules: codebook attention and coordinate attention. The former extracts useful prototypes containing rich geometry and appearance information from the prior codebook, and the latter propagates this prior information into each coordinate and enriches its feature representation for a scene or object surface. With the help of the prior information, our method can render 3D views with more photo-realistic appearance and geometry than current methods while using fewer available calibrated images. Experiments on various scene reconstruction datasets, including DTU and BlendedMVS, as well as the full 3D head reconstruction dataset H3DS, demonstrate the robustness of our proposed method under fewer input views and its fine detail-preserving capability.
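To make the two modules concrete, below is a minimal sketch of how codebook attention and coordinate attention could be realized as standard cross-attention layers in PyTorch. The class names, dimensions, and layer choices here are illustrative assumptions based only on the abstract, not the authors' released implementation: codebook attention distills a set of prototypes from a prior codebook via learnable query tokens, and coordinate attention lets each (positionally encoded) 3D coordinate attend over those prototypes to enrich its feature before the downstream MLP heads.

```python
# Illustrative sketch only: a plausible reading of "codebook attention" and
# "coordinate attention" from the abstract, using generic cross-attention.
import torch
import torch.nn as nn


class CodebookAttention(nn.Module):
    """Distills useful prototypes from a prior codebook (e.g. embeddings from a
    pretrained VQ model) via cross-attention with learnable query tokens."""

    def __init__(self, dim: int = 256, num_prototypes: int = 64, num_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_prototypes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, codebook: torch.Tensor) -> torch.Tensor:
        # codebook: (B, N_codes, dim) prior embeddings
        q = self.queries.unsqueeze(0).expand(codebook.size(0), -1, -1)
        prototypes, _ = self.attn(q, codebook, codebook)
        return prototypes  # (B, num_prototypes, dim)


class CoordinateAttention(nn.Module):
    """Propagates prototype information into per-coordinate features:
    each positionally encoded 3D sample point attends over the prototypes."""

    def __init__(self, coord_dim: int = 63, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.to_query = nn.Linear(coord_dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, coords: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
        # coords: (B, N_points, coord_dim) positionally encoded sample points
        q = self.to_query(coords)
        enriched, _ = self.attn(q, prototypes, prototypes)
        return enriched  # (B, N_points, dim) features for the geometry/color MLPs
```

In this reading, the enriched per-coordinate features would replace the plain positional encodings that a NeRF-style MLP normally consumes, which is how the codebook prior reaches every queried 3D point.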
