Paper Title
Self-aligned Spatial Feature Extraction Network for UAV Vehicle Re-identification

Authors

Yao, Aihuan, Qi, Jiahao, Zhong, Ping

Abstract

Compared with existing vehicle re-identification (ReID) tasks conducted on datasets collected by fixed surveillance cameras, vehicle ReID for unmanned aerial vehicles (UAVs) remains under-explored and can be more challenging. From the UAV's perspective, vehicles of the same color and type show extremely similar appearances, so mining fine-grained characteristics becomes necessary. Recent works tend to extract discriminative information through regional features and component features. The former requires input images to be aligned, and the latter entails detailed annotations, both of which are difficult to satisfy in UAV applications. To extract effective fine-grained features and avoid tedious annotation work, this letter develops an unsupervised self-aligned network consisting of three branches. The network introduces a self-alignment module that converts input images with variable orientations to a uniform orientation, implemented under the constraint of a triplet loss function designed with spatial features. On this basis, the spatial features obtained by vertical and horizontal segmentation, together with global features, are integrated to improve the representation ability in the embedding space. Extensive experiments are conducted on the UAV-VeID dataset, and our method achieves the best performance compared with recent ReID works.
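The abstract's idea of combining stripe-based spatial features with a global feature, trained under a triplet loss, can be sketched as below. This is a minimal illustration only, not the authors' implementation: the function names, stripe count, average-pooling choice, and margin value are assumptions.

```python
import numpy as np

def part_pooled_features(fmap, n_parts=4):
    """Build an embedding from a conv feature map of shape (C, H, W):
    one global average-pooled feature, plus average-pooled horizontal
    and vertical stripes (the 'spatial features' of the abstract)."""
    C, H, W = fmap.shape
    # global feature: average over the entire spatial extent
    g = fmap.mean(axis=(1, 2))
    # horizontal stripes: split along height, pool each stripe
    h_parts = [p.mean(axis=(1, 2)) for p in np.array_split(fmap, n_parts, axis=1)]
    # vertical stripes: split along width, pool each stripe
    v_parts = [p.mean(axis=(1, 2)) for p in np.array_split(fmap, n_parts, axis=2)]
    # integrate global and spatial features into one embedding
    return np.concatenate([g] + h_parts + v_parts)  # shape (C * (1 + 2*n_parts),)

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss on embeddings: pull the positive closer
    than the negative by at least `margin` (margin value assumed)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

In use, embeddings from `part_pooled_features` for an anchor image, a same-identity positive, and a different-identity negative would be fed to `triplet_loss`; the abstract's self-alignment module would normalize orientation before the feature map is produced.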