Paper Title

SALD: Sign Agnostic Learning with Derivatives

Paper Authors

Matan Atzmon, Yaron Lipman

Paper Abstract

Learning 3D geometry directly from raw data, such as point clouds, triangle soups, or unoriented meshes, is still a challenging task that feeds many downstream computer vision and graphics applications. In this paper, we introduce SALD: a method for learning implicit neural representations of shapes directly from raw data. We generalize sign agnostic learning (SAL) to include derivatives: given an unsigned distance function to the input raw data, we advocate a novel sign agnostic regression loss, incorporating both pointwise values and gradients of the unsigned distance function. Optimizing this loss leads to a signed implicit function solution, the zero level set of which is a high-quality and valid manifold approximation to the input 3D data. The motivation behind SALD is that incorporating derivatives in a regression loss leads to a lower sample complexity, and consequently better fitting. In addition, we prove that SAL enjoys a minimal length property in 2D, favoring minimal length solutions. More importantly, we are able to show that this property still holds for SALD, i.e., with derivatives included. We demonstrate the efficacy of SALD for shape space learning on two challenging datasets: ShapeNet, which contains inconsistent orientations and non-manifold meshes, and D-Faust, which contains raw 3D scans (triangle soups). On both datasets, we present state-of-the-art results.
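For intuition, the loss described in the abstract combines a sign agnostic value term (matching |f| to the unsigned distance h) with a sign agnostic derivative term (matching the gradient of f to the gradient of h up to sign). Below is a minimal PyTorch sketch under those assumptions; the function name sald_loss, the weight lam, the Monte Carlo mean over sample points, and the precomputed h values and gradients are illustrative choices, not the authors' exact formulation.

```python
import torch

def sald_loss(f_pred, f_grad, h_vals, h_grads, lam=0.1):
    """Sign agnostic regression loss with derivatives (sketch).

    f_pred:  (N,) network values f(x; theta) at sampled points
    f_grad:  (N, 3) spatial gradients of f at those points (via autograd)
    h_vals:  (N,) unsigned distances from the points to the raw input data
    h_grads: (N, 3) gradients of the unsigned distance function
    lam:     weight of the derivative term (hypothetical default)
    """
    # Value term: | |f| - h |, agnostic to the sign of f.
    value_term = (f_pred.abs() - h_vals).abs().mean()

    # Derivative term: match grad f to +grad h or -grad h,
    # whichever is closer, so the loss stays sign agnostic.
    d_plus = (f_grad - h_grads).norm(dim=-1)
    d_minus = (f_grad + h_grads).norm(dim=-1)
    deriv_term = torch.minimum(d_plus, d_minus).mean()

    return value_term + lam * deriv_term
```

In practice, f_grad would be obtained with torch.autograd.grad on the network output with respect to the input coordinates, and the expectation would be approximated by sampling points around the raw data.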
