Paper Title
GDIP: Gated Differentiable Image Processing for Object-Detection in Adverse Conditions
Paper Authors
Paper Abstract
Detecting objects under adverse weather and lighting conditions is crucial for the safe and continuous operation of an autonomous vehicle, and remains an unsolved problem. We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture, which can be plugged into existing object detection networks (e.g., Yolo) and trained end-to-end with adverse condition images such as those captured under fog and low lighting. Our proposed GDIP block learns to enhance images directly through the downstream object detection loss. This is achieved by learning parameters of multiple image pre-processing (IP) techniques that operate concurrently, with their outputs combined using weights learned through a novel gating mechanism. We further improve GDIP through a multi-stage guidance procedure for progressive image enhancement. Finally, trading off accuracy for speed, we propose a variant of GDIP that can be used as a regularizer for training Yolo, which eliminates the need for GDIP-based image enhancement during inference, resulting in higher throughput and making real-world deployment viable. We demonstrate significant improvement in detection performance over several state-of-the-art methods through quantitative and qualitative studies on synthetic datasets such as PascalVOC, and real-world foggy (RTTS) and low-lighting (ExDark) datasets.
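As a rough illustration of the gating idea described in the abstract, the PyTorch sketch below runs a few differentiable image-processing operations in parallel and fuses their outputs with gate weights predicted from the image. The specific operations (gamma, contrast), their parameterizations, and the softmax gating are assumptions made for illustration only; they are not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): gated fusion of differentiable
# image-processing (IP) ops, with per-op parameters and gate weights predicted
# from the image, trainable end-to-end through a downstream detection loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gamma_op(img, p):
    # Hypothetical parameterization: p in (0, 1) -> gamma roughly in [0.5, 2.5].
    gamma = 0.5 + 2.0 * p.view(-1, 1, 1, 1)
    return img.clamp(min=1e-6) ** gamma


def contrast_op(img, p):
    # Scale deviation from the per-image mean; p in (0, 1) controls strength.
    mean = img.mean(dim=(2, 3), keepdim=True)
    return mean + (0.5 + p.view(-1, 1, 1, 1)) * (img - mean)


class GatedIPBlock(nn.Module):
    """Minimal GDIP-style block: run several IP ops concurrently and combine
    their outputs with learned gate weights (a sketch, not the exact design)."""

    def __init__(self, ops=(gamma_op, contrast_op), feat_dim=64):
        super().__init__()
        self.ops = ops
        # Tiny image encoder used to predict op parameters and gate logits.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.param_head = nn.Linear(feat_dim, len(ops))  # one scalar param per op
        self.gate_head = nn.Linear(feat_dim, len(ops))   # one gate logit per op

    def forward(self, img):
        feat = self.encoder(img)
        params = torch.sigmoid(self.param_head(feat))    # (B, K) in (0, 1)
        gates = F.softmax(self.gate_head(feat), dim=-1)  # (B, K), sums to 1
        outs = torch.stack(
            [op(img, params[:, k]) for k, op in enumerate(self.ops)], dim=1
        )                                                # (B, K, 3, H, W)
        return (gates.view(*gates.shape, 1, 1, 1) * outs).sum(dim=1)


# The enhanced image would then be passed to a detector (e.g., YOLO), and the
# whole pipeline trained with the detection loss alone.
enhanced = GatedIPBlock()(torch.rand(2, 3, 128, 128))
```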