Title


IDMS: Instance Depth for Multi-scale Monocular 3D Object Detection

Authors

Hu, Chao; Zhu, Liqiang; Qiu, Weibing; Wu, Weijie

Abstract


Due to the lack of depth information in monocular images and the resulting poor detection accuracy of monocular 3D object detection, we propose an instance-depth method for multi-scale monocular 3D object detection. First, to enhance the model's ability to handle targets at different scales, we design a multi-scale perception module based on dilated convolution; considering the inconsistency between feature maps at different scales, the depth features containing multi-scale information are re-refined in both the spatial and channel directions. Second, to give the model better 3D perception, we propose using instance depth information as an auxiliary learning task to enhance the spatial depth features of 3D targets, supervising the auxiliary task with sparse instance depth. Finally, the proposed algorithm is validated on the KITTI test and validation sets; experimental results show that, compared with the baseline method, the proposed method improves AP40 by 5.27% in the car category, effectively improving the detection performance of the monocular 3D object detection algorithm.
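The abstract describes two ingredients: a multi-scale perception module built from parallel dilated convolutions whose fused output is re-refined along the channel and spatial directions, and an auxiliary instance-depth task supervised only at sparse labeled locations. The following PyTorch sketch illustrates one plausible realization of both ideas; the module layout, layer sizes, and function names are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


class MultiScalePerception(nn.Module):
    """Hypothetical sketch of a dilated-convolution multi-scale perception
    module: parallel 3x3 branches with different dilation rates capture
    context at several receptive fields, and the fused features are
    re-refined by channel and spatial attention (an assumed design)."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps the
        # spatial resolution unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        # Channel-direction re-weighting (squeeze-and-excitation style).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial-direction re-weighting from a single-channel map.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi)
        fused = fused * self.channel_att(fused)  # channel direction
        fused = fused * self.spatial_att(fused)  # spatial direction
        return fused


def sparse_instance_depth_loss(pred: torch.Tensor,
                               target: torch.Tensor,
                               mask: torch.Tensor) -> torch.Tensor:
    """L1 loss for the auxiliary depth task, applied only where sparse
    instance-depth labels exist (mask is True at labeled pixels)."""
    return (pred - target).abs()[mask].mean()
```

A feature map of shape `(N, C, H, W)` passes through unchanged in shape, so the module can be dropped into an existing detection backbone; the loss is averaged only over labeled positions, matching the sparse supervision described in the abstract.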
