Paper Title


Learning-Based Human Segmentation and Velocity Estimation Using Automatic Labeled LiDAR Sequence for Training

Authors

Wonjik Kim, Masayuki Tanaka, Masatoshi Okutomi, Yoko Sasaki

Abstract


In this paper, we propose an automatic labeled sequential data generation pipeline for human segmentation and velocity estimation with point clouds. Reflecting the impact of deep neural networks, state-of-the-art network architectures have been proposed for human recognition using point clouds captured by Light Detection and Ranging (LiDAR). However, one disadvantage is that legacy datasets may only cover the image domain without providing important label information, and this limitation has hindered research progress to date. Therefore, we develop an automatic labeled sequential data generation pipeline in which any parameter of the data generation environment can be controlled and which provides pixel-wise, per-frame ground-truth segmentation and pixel-wise velocity information for human recognition. Our approach uses a precise human model and reproduces precise motion to generate realistic artificial data. We present more than 7K video sequences, each consisting of 32 frames, generated by the proposed pipeline. With the proposed sequence generator, we confirm that human segmentation performance improves when using the video domain compared to the image domain. We also evaluate our data by comparing it with data generated under different conditions. In addition, we estimate pedestrian velocity with LiDAR using only data generated by the proposed pipeline.
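To make the described label format concrete, here is a minimal, purely illustrative sketch (Python/NumPy) of how one automatically labeled LiDAR sequence could be organized: 32 frames of range images with pixel-wise segmentation and pixel-wise velocity ground truth. The resolution, array names, and the `human_pixel_ratio` helper are assumptions for illustration, not the authors' actual data format.

```python
# Hypothetical layout of one labeled LiDAR sequence; shapes and names are
# illustrative assumptions, not the pipeline's real output format.
import numpy as np

FRAMES = 32          # sequence length stated in the abstract
H, W = 64, 512       # assumed LiDAR range-image resolution (illustrative)

# Range (depth) image per frame, as a spinning LiDAR might produce.
range_images = np.zeros((FRAMES, H, W), dtype=np.float32)

# Pixel-wise, per-frame ground-truth segmentation: 1 = human, 0 = background.
segmentation = np.zeros((FRAMES, H, W), dtype=np.uint8)

# Pixel-wise velocity label, e.g. a 3D velocity vector per pixel.
velocity = np.zeros((FRAMES, H, W, 3), dtype=np.float32)

def human_pixel_ratio(seg: np.ndarray) -> float:
    """Fraction of pixels labeled as human across the whole sequence."""
    return float(seg.mean())

if __name__ == "__main__":
    print("sequence shape:", range_images.shape)
    print("human pixel ratio:", human_pixel_ratio(segmentation))
```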
