Paper Title

Fast Neural Network Adaptation via Parameter Remapping and Architecture Search

Paper Authors

Jiemin Fang, Yuzhu Sun, Kangjian Peng, Qian Zhang, Yuan Li, Wenyu Liu, Xinggang Wang

Paper Abstract

Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge, though, is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost. In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network (e.g. a high performing manually designed backbone) to become a network with different depth, width, or kernels via a Parameter Remapping technique, making it possible to utilize NAS for detection/segmentation tasks a lot more efficiently. In our experiments, we conduct FNA on MobileNetV2 to obtain new networks for both segmentation and detection that clearly outperform existing networks designed both manually and by NAS. The total computation cost of FNA is significantly less than SOTA segmentation/detection NAS approaches: 1737$\times$ less than DPC, 6.8$\times$ less than Auto-DeepLab and 7.4$\times$ less than DetNAS. The code is available at https://github.com/JaminFong/FNA.
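To make the Parameter Remapping idea concrete, below is a minimal PyTorch sketch of what depth-, width-, and kernel-level remapping could look like. The function names and the exact rules (placing seed kernel weights at the center of a larger kernel, keeping the leading channels when narrowing, reusing a stage's last layer when deepening) are illustrative assumptions based on the abstract, not the authors' reference implementation; see https://github.com/JaminFong/FNA for the official code.

```python
# A sketch of the three parameter-remapping levels described in the abstract.
# All remapping rules here are assumptions for illustration.
import torch

def remap_kernel(seed_w: torch.Tensor, new_k: int) -> torch.Tensor:
    """Remap a conv weight of shape (O, I, k, k) to spatial size new_k x new_k.

    Assumption: a larger kernel keeps the seed weights at its center with
    zeros elsewhere; a smaller kernel crops the center of the seed weights.
    """
    o, i, k, _ = seed_w.shape
    if new_k == k:
        return seed_w.clone()
    if new_k > k:
        new_w = torch.zeros(o, i, new_k, new_k, dtype=seed_w.dtype)
        off = (new_k - k) // 2
        new_w[:, :, off:off + k, off:off + k] = seed_w
        return new_w
    off = (k - new_k) // 2
    return seed_w[:, :, off:off + new_k, off:off + new_k].clone()

def remap_width(seed_w: torch.Tensor, new_out: int, new_in: int) -> torch.Tensor:
    """Remap channel dimensions; this sketch keeps the first channels when narrowing."""
    o, i = seed_w.shape[:2]
    assert new_out <= o and new_in <= i, "sketch only handles narrowing"
    return seed_w[:new_out, :new_in].clone()

def remap_depth(seed_stage: list, new_depth: int) -> list:
    """Remap a stage's list of layer weights; extra layers reuse the last seed layer."""
    return [seed_stage[min(j, len(seed_stage) - 1)].clone()
            for j in range(new_depth)]

# Usage example: adapt a seed 3x3 conv weight to a 5x5 slot in the super network.
seed = torch.randn(32, 16, 3, 3)
print(remap_kernel(seed, 5).shape)     # torch.Size([32, 16, 5, 5])
print(remap_width(seed, 24, 8).shape)  # torch.Size([24, 8, 3, 3])
```

Because every candidate operation in the super network receives remapped seed parameters like this, the search can start from ImageNet-quality weights without ever re-running ImageNet pre-training, which is where the reported cost savings over DPC, Auto-DeepLab, and DetNAS come from.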
