Paper Title


PointINet: Point Cloud Frame Interpolation Network

Authors

Fan Lu, Guang Chen, Sanqing Qu, Zhijun Li, Yinlong Liu, Alois Knoll

Abstract


LiDAR point cloud streams are usually sparse in the time dimension, limited by hardware performance. Generally, the frame rates of mechanical LiDAR sensors are 10 to 20 Hz, which is much lower than those of other commonly used sensors such as cameras. To overcome the temporal limitations of LiDAR sensors, a novel task named Point Cloud Frame Interpolation is studied in this paper. Given two consecutive point cloud frames, Point Cloud Frame Interpolation aims to generate intermediate frame(s) between them. To achieve this, we propose a novel framework, the Point Cloud Frame Interpolation Network (PointINet). Based on the proposed method, low frame rate point cloud streams can be upsampled to higher frame rates. We first estimate bi-directional 3D scene flow between the two point clouds and then warp them to the given time step based on the 3D scene flow. To fuse the two warped frames and generate intermediate point cloud(s), we propose a novel learning-based points fusion module, which simultaneously takes both warped point clouds into consideration. We design both quantitative and qualitative experiments to evaluate the performance of the point cloud frame interpolation method, and extensive experiments on two large-scale outdoor LiDAR datasets demonstrate the effectiveness of the proposed PointINet. Our code is available at https://github.com/ispc-lab/PointINet.git.
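The warp-then-fuse pipeline described in the abstract can be illustrated with a minimal sketch. The snippet below is an illustrative simplification, not PointINet's implementation: it linearly scales per-point scene flow to warp each frame toward an intermediate time step `t`, and stands in for the learned points fusion module with a naive concatenation. The function names and toy data are hypothetical.

```python
import numpy as np

def warp_point_cloud(points, scene_flow, t):
    """Warp an (N, 3) point cloud toward time step t in [0, 1] by
    linearly scaling its per-point 3D scene flow (simplified sketch)."""
    return points + t * scene_flow

def interpolate_frames(pc0, pc1, flow_fwd, flow_bwd, t):
    """Warp frame 0 forward and frame 1 backward to time t, then merge
    the two warped clouds. PointINet replaces this naive concatenation
    with a learned points fusion module."""
    warped0 = warp_point_cloud(pc0, flow_fwd, t)
    warped1 = warp_point_cloud(pc1, flow_bwd, 1.0 - t)
    return np.concatenate([warped0, warped1], axis=0)

# Toy example: a single point moving from (0, 0, 0) to (1, 0, 0).
pc0 = np.array([[0.0, 0.0, 0.0]])
pc1 = np.array([[1.0, 0.0, 0.0]])
flow_fwd = pc1 - pc0  # forward scene flow (frame 0 -> frame 1)
flow_bwd = pc0 - pc1  # backward scene flow (frame 1 -> frame 0)

mid = interpolate_frames(pc0, pc1, flow_fwd, flow_bwd, 0.5)
# Both warped points land at (0.5, 0, 0), the midpoint of the motion.
```

In the actual method, the bi-directional scene flow is estimated by a network rather than given, and the fusion step learns to select and combine points from the two warped clouds instead of simply stacking them.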
