Paper Title
Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian
Paper Authors
Paper Abstract
We propose a framework that can deform an object in a 2D image as if it existed in 3D space. Most existing methods for 3D-aware image manipulation are limited to (1) changing only global scene information or depth, or (2) manipulating objects of specific categories. In this paper, we present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type. While our framework leverages 2D-to-3D reconstruction, we argue that reconstruction alone is not sufficient for realistic deformation due to its vulnerability to topological errors. We therefore take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud. Given the deformation energy computed from the predicted shape Laplacian and user-defined deformation handles (e.g., keypoints), we obtain bounded biharmonic weights that model plausible handle-based image deformation. In our experiments, we present results on deforming 2D character and clothed human images. We also quantitatively show that our approach produces more accurate deformation weights than alternative methods (i.e., mesh reconstruction and point cloud Laplacian approaches).
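For context, the handle-based deformation described in the abstract follows the standard bounded biharmonic weights formulation (Jacobson et al., 2011). The sketch below uses assumed notation (the discrete shape Laplacian L, mass matrix M, handle locations h_k, and handle transformations T_j are our labels, not symbols taken from the paper); the paper's contribution lies in predicting L directly from the reconstructed point cloud rather than from a clean mesh. Each per-handle weight field w_j minimizes a biharmonic energy built from the shape Laplacian:

\[
\min_{w_j}\; \tfrac{1}{2}\, w_j^{\top} \bigl( L^{\top} M^{-1} L \bigr)\, w_j
\quad \text{s.t.} \quad
w_j(h_k) = \delta_{jk}, \qquad
\sum\nolimits_{j} w_j = 1, \qquad
0 \le w_j \le 1,
\]

where \( L^{\top} M^{-1} L \) discretizes the squared Laplacian (bilaplacian) over the shape's volume. The image is then deformed by blending the user's handle transformations with these weights, e.g. \( p' = \sum_j w_j(p)\, T_j\, p \) for each point p, so an inaccurate Laplacian directly degrades the resulting deformation.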