Title
Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives
Authors
Abstract
Recently, Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful and promising tool to represent different kinds of signals due to their continuous, differentiable properties, showing superiority over classical discretized representations. However, the training of neural networks for INRs utilizes only input-output pairs, and the derivatives of the target output with respect to the input, which can be accessed in some cases, are usually ignored. In this paper, we propose a training paradigm for INRs whose target output is image pixels, to encode image derivatives in addition to image values in the neural network. Specifically, we use finite differences to approximate image derivatives. We show how this training paradigm can be leveraged to solve typical INR problems, i.e., image regression and inverse rendering, and demonstrate that it can improve the data efficiency and generalization capabilities of INRs. The code of our method is available at \url{https://github.com/megvii-research/Sobolev_INRs}.
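To illustrate the core idea, a minimal NumPy sketch of a Sobolev-style loss is given below: ground-truth image derivatives are approximated with finite differences, and the loss penalizes mismatches in both pixel values and derivatives. This is an assumption-laden sketch for intuition, not the authors' implementation; the function names and the weighting scheme are hypothetical.

```python
import numpy as np

def finite_difference_grads(img):
    """Approximate spatial image derivatives with central differences.
    img: 2D array of pixel values. Returns (d/dx, d/dy)."""
    gy, gx = np.gradient(img)  # np.gradient differentiates along axis 0 (y) then axis 1 (x)
    return gx, gy

def sobolev_loss(pred, target, deriv_weight=1.0):
    """Sobolev-style training loss (hypothetical sketch): match pixel
    values AND their finite-difference derivatives, rather than values alone."""
    value_loss = np.mean((pred - target) ** 2)
    pgx, pgy = finite_difference_grads(pred)
    tgx, tgy = finite_difference_grads(target)
    deriv_loss = np.mean((pgx - tgx) ** 2) + np.mean((pgy - tgy) ** 2)
    return value_loss + deriv_weight * deriv_loss
```

In an actual INR pipeline, `pred` would be the network's reconstruction of the image over a pixel grid, and the derivative term supplies the extra supervision signal the abstract refers to.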