Paper Title
Exploiting Multi-Layer Grid Maps for Surround-View Semantic Segmentation of Sparse LiDAR Data
Paper Authors
Paper Abstract
In this paper, we consider the transformation of laser range measurements into a top-view grid map representation to approach the task of LiDAR-only semantic segmentation. Since the recent publication of the SemanticKITTI data set, researchers are now able to study semantic segmentation of urban LiDAR sequences based on a reasonable amount of data. While other approaches propose to learn directly on the 3D point clouds, we exploit a grid map framework to extract relevant information and represent it using multi-layer grid maps. This representation allows us to use well-studied deep learning architectures from the image domain to predict a dense semantic grid map using only the sparse input data of a single LiDAR scan. We compare single-layer and multi-layer approaches and demonstrate the benefit of a multi-layer grid map input. Since the grid map representation allows us to predict a dense, 360° semantic environment representation, we further develop a method to combine the semantic information from multiple scans and create dense ground truth grids. This method allows us to evaluate and compare the performance of our models not only on grid cells with a detection, but over the full visible measurement range.
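To make the core preprocessing step concrete, the following is a minimal sketch of how a LiDAR scan could be projected into a multi-layer top-view grid map. The specific layers (detection count, maximum height, mean intensity), the grid extent, and the cell size are illustrative assumptions for this sketch, not the paper's exact configuration.

```python
import numpy as np

def points_to_grid_maps(points, grid_size=100.0, cell_size=0.5):
    """Project a LiDAR point cloud (N x 4: x, y, z, intensity) into a
    multi-layer top-view grid map of shape (layers, H, W).

    Layer choice here (detections, max height, mean intensity) is an
    illustrative assumption; the paper may use different layers.
    """
    n_cells = int(grid_size / cell_size)
    half = grid_size / 2.0

    # Keep only points inside the grid extent around the sensor origin.
    mask = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    pts = points[mask]

    # Map metric x/y coordinates to flat integer cell indices.
    ix = ((pts[:, 0] + half) / cell_size).astype(int)
    iy = ((pts[:, 1] + half) / cell_size).astype(int)
    flat = ix * n_cells + iy

    detections = np.zeros(n_cells * n_cells)
    max_height = np.full(n_cells * n_cells, -np.inf)
    intensity_sum = np.zeros(n_cells * n_cells)

    # Unbuffered scatter operations accumulate per-cell statistics.
    np.add.at(detections, flat, 1.0)
    np.maximum.at(max_height, flat, pts[:, 2])
    np.add.at(intensity_sum, flat, pts[:, 3])

    # Mean intensity per occupied cell; empty cells stay at zero.
    occupied = detections > 0
    mean_intensity = np.zeros_like(intensity_sum)
    mean_intensity[occupied] = intensity_sum[occupied] / detections[occupied]
    max_height[~occupied] = 0.0

    # Stack layers into an image-like tensor (channels, H, W) so that
    # standard CNN architectures from the image domain can consume it.
    return np.stack([detections, max_height, mean_intensity]).reshape(
        3, n_cells, n_cells
    )
```

The resulting tensor can be fed to any 2D segmentation network; a dense semantic grid is then predicted per cell, even where the single-scan input is sparse.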