Paper Title


Beyond RGB: Scene-Property Synthesis with Neural Radiance Fields

Authors

Mingtong Zhang, Shuhong Zheng, Zhipeng Bao, Martial Hebert, Yu-Xiong Wang

Abstract


Comprehensive 3D scene understanding, both geometrically and semantically, is important for real-world applications such as robot perception. Most of the existing work has focused on developing data-driven discriminative models for scene understanding. This paper provides a new approach to scene understanding, from a synthesis model perspective, by leveraging the recent progress on implicit 3D representation and neural rendering. Building upon the great success of Neural Radiance Fields (NeRFs), we introduce Scene-Property Synthesis with NeRF (SS-NeRF) that is able to not only render photo-realistic RGB images from novel viewpoints, but also render various accurate scene properties (e.g., appearance, geometry, and semantics). By doing so, we facilitate addressing a variety of scene understanding tasks under a unified framework, including semantic segmentation, surface normal estimation, reshading, keypoint detection, and edge detection. Our SS-NeRF framework can be a powerful tool for bridging generative learning and discriminative learning, and thus be beneficial to the investigation of a wide range of interesting problems, such as studying task relationships within a synthesis paradigm, transferring knowledge to novel tasks, facilitating downstream discriminative tasks as ways of data augmentation, and serving as auto-labeller for data creation.
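The abstract describes extending a NeRF so that, in addition to RGB and density, each 3D point also predicts other scene properties that can be composited into novel-view property maps. Below is a minimal, hypothetical sketch of that idea, not the authors' released implementation: the class name `SSNeRFField`, the head layout, and parameters such as `num_classes` are illustrative assumptions.

```python
# Hypothetical sketch: a NeRF-style MLP with extra per-point output heads
# (semantic logits, surface normals) alongside the usual density and RGB.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SSNeRFField(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, hidden=256, num_classes=40):
        super().__init__()
        # Shared trunk over positionally encoded 3D coordinates.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)  # volume density sigma
        # View-dependent RGB head, as in the original NeRF.
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        # Extra, view-independent scene-property heads (illustrative).
        self.semantic_head = nn.Linear(hidden, num_classes)  # class logits
        self.normal_head = nn.Linear(hidden, 3)               # surface normal

    def forward(self, x_enc, d_enc):
        h = self.trunk(x_enc)
        sigma = torch.relu(self.density_head(h))
        rgb = self.rgb_head(torch.cat([h, d_enc], dim=-1))
        sem = self.semantic_head(h)
        normal = F.normalize(self.normal_head(h), dim=-1)
        # Each per-point quantity would then be accumulated along camera rays
        # with the same volume-rendering weights used for RGB.
        return sigma, rgb, sem, normal
```

Because all heads share the density field and the volume-rendering weights, a single trained model can render the RGB image and the corresponding property maps (segmentation, normals, etc.) from any novel viewpoint, which is what allows the tasks listed in the abstract to be handled in one unified framework.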
