Paper Title

Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection

Authors

Zeyi Huang, Yang Zou, Vijayakumar Bhagavatula, Dong Huang

Abstract

Weakly Supervised Object Detection (WSOD) has emerged as an effective tool to train object detectors using only the image-level category labels. However, without object-level labels, WSOD detectors are prone to detect bounding boxes on salient objects, clustered objects and discriminative object parts. Moreover, the image-level category labels do not enforce consistent object detection across different transformations of the same images. To address the above issues, we propose a Comprehensive Attention Self-Distillation (CASD) training approach for WSOD. To balance feature learning among all object instances, CASD computes the comprehensive attention aggregated from multiple transformations and feature layers of the same images. To enforce consistent spatial supervision on objects, CASD conducts self-distillation on the WSOD networks, such that the comprehensive attention is approximated simultaneously by multiple transformations and feature layers of the same images. CASD produces new state-of-the-art WSOD results on standard benchmarks such as PASCAL VOC 2007/2012 and MS-COCO.
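The core idea in the abstract, aggregating attention maps from multiple transformations of the same image into a "comprehensive" map, then distilling each individual map toward it, can be sketched in a few lines. The following NumPy sketch is illustrative only: the element-wise-max aggregation and the squared-error distillation loss are assumptions for clarity, not the paper's exact formulation, and the function names are hypothetical.

```python
import numpy as np

def aggregate_attention(attention_maps):
    """Aggregate attention maps (e.g., from flipped/rescaled copies of an
    image, or from different feature layers) into one comprehensive map.
    Element-wise max is an assumed aggregation rule for this sketch."""
    return np.maximum.reduce(attention_maps)

def casd_loss(attention_maps):
    """Self-distillation loss: pull each individual attention map toward
    the comprehensive map (squared error, assumed for this sketch)."""
    comprehensive = aggregate_attention(attention_maps)
    return float(np.mean([(a - comprehensive) ** 2 for a in attention_maps]))

# Two toy 2x2 attention maps standing in for two transformations:
maps = [np.array([[0.2, 0.8], [0.5, 0.1]]),
        np.array([[0.6, 0.3], [0.4, 0.9]])]
comprehensive = aggregate_attention(maps)  # element-wise max of the two
loss = casd_loss(maps)                     # > 0 while the maps disagree
```

Because the comprehensive map covers regions that any single view attends to, minimizing this loss spreads supervision across all object instances rather than only the most discriminative parts, which is the balancing effect the abstract describes.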
