Paper Title

IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes

Paper Authors

Shubham Dokania, A. H. Abdul Hafez, Anbumani Subramanian, Manmohan Chandraker, C. V. Jawahar

Paper Abstract

Autonomous driving and assistance systems rely on annotated data from traffic and road scenarios to model and learn the various object relations in complex real-world settings. The preparation and training of deployable deep learning architectures require models to be suited to different traffic scenarios and to adapt to different situations. Currently, existing datasets, while large-scale, lack such diversity and are geographically biased towards mainly developed cities. The unstructured and complex driving layouts found in several developing countries such as India pose a challenge to these models due to the sheer degree of variation in object types, densities, and locations. To facilitate better research toward accommodating such scenarios, we build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors, with 12k annotated driving LiDAR frames across various traffic scenarios. We discuss the need for this dataset through statistical comparisons with existing datasets and highlight benchmarks on standard 3D object detection and tracking tasks in complex layouts. Code and data are available at https://github.com/shubham1810/idd3d_kit.git
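
For readers who want to inspect the released LiDAR frames, the sketch below shows a minimal way to read one raw sweep with NumPy, assuming a KITTI-style packed float32 layout of (x, y, z, intensity) per point. The file path, function name, and point layout are illustrative assumptions, not the official idd3d_kit API; consult the linked repository for the actual loading utilities.

import numpy as np

def load_lidar_frame(bin_path: str, num_features: int = 4) -> np.ndarray:
    """Load one raw LiDAR sweep stored as packed float32 values.

    Assumes a KITTI-style (x, y, z, intensity) layout per point; the actual
    IDD-3D layout may differ, so verify against the idd3d_kit devkit.
    """
    points = np.fromfile(bin_path, dtype=np.float32)
    return points.reshape(-1, num_features)

if __name__ == "__main__":
    # Hypothetical path; substitute a real frame from the downloaded dataset.
    pts = load_lidar_frame("data/idd3d/lidar/000000.bin")
    print(pts.shape)  # e.g. (N, 4): x, y, z, intensity for N points

Once frames are loaded this way, they can be fed to standard 3D detection or tracking pipelines for the benchmark tasks mentioned above.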
