Paper Title
Deep Learning based Multi-Modal Sensing for Tracking and State Extraction of Small Quadcopters
Paper Authors
Abstract
This paper proposes a multi-sensor based approach to detect, track, and localize a quadcopter unmanned aerial vehicle (UAV). Specifically, a pipeline is developed to process monocular RGB and thermal video (captured from a fixed platform) to detect and track the UAV in our field of view (FoV). Subsequently, a 2D planar lidar is used to convert pixel data to actual distance measurements, thereby enabling localization of the UAV in global coordinates. The monocular data are processed through a deep learning-based object detection method that computes an initial bounding box for the UAV. The thermal data are processed through a thresholding and Kalman filter approach to detect and track the bounding box. Training and testing data are prepared by combining a set of original experiments conducted in a motion capture environment with publicly available UAV image data. The new pipeline compares favorably to existing methods and demonstrates promising tracking and localization performance in sample experiments.
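The thermal branch of the pipeline ("thresholding and Kalman filter approach") can be illustrated with a minimal sketch: a toy thresholding detector that returns the centroid of hot pixels, fed into a constant-velocity Kalman filter over the bounding-box center. All class names, matrix values, and the threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class BoxCenterKF:
    """Constant-velocity Kalman filter over a bounding-box center.

    State: [x, y, vx, vy]; measurement: [x, y] (pixel coordinates).
    Noise covariances are illustrative placeholders.
    """

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        self.P = np.eye(4) * 10.0                     # state covariance
        self.F = np.eye(4)                            # state transition
        self.F[0, 2] = self.F[1, 3] = dt              # x += vx*dt, y += vy*dt
        self.H = np.zeros((2, 4))                     # measurement model
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                     # process noise
        self.R = np.eye(2) * 1.0                      # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

def detect_hot_blob(thermal, thresh=200):
    """Toy thresholding detector: centroid of pixels above `thresh`."""
    ys, xs = np.nonzero(thermal > thresh)
    if xs.size == 0:
        return None                                   # no detection this frame
    return float(xs.mean()), float(ys.mean())
```

In use, each thermal frame is thresholded; when a detection is found, the filter is updated with it, and on missed frames only `predict()` is called, so tracking coasts through short dropouts. A real system would also gate the association between the deep-learning bounding box and the thermal blob, which this sketch omits.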