Paper Title

Rethinking Performance Estimation in Neural Architecture Search

Authors

Xiawu Zheng, Rongrong Ji, Qiang Wang, Qixiang Ye, Zhenguo Li, Yonghong Tian, Qi Tian

Abstract

Neural architecture search (NAS) remains a challenging problem, which is attributed to the indispensable and time-consuming component of performance estimation (PE). In this paper, we provide a novel yet systematic rethinking of PE in a resource-constrained regime, termed budgeted PE (BPE), which precisely and effectively estimates the performance of an architecture sampled from an architecture space. Since searching for an optimal BPE is extremely time-consuming, as it requires training a large number of networks for evaluation, we propose a Minimum Importance Pruning (MIP) approach. Given a dataset and a BPE search space, MIP estimates the importance of hyper-parameters using a random forest and then prunes the least important one from the next iteration. In this way, MIP effectively prunes less important hyper-parameters to allocate more computational resources to more important ones, thus achieving effective exploration. By combining BPE with various search algorithms, including reinforcement learning, evolutionary algorithms, random search, and differentiable architecture search, we achieve a 1,000× NAS speed-up with a negligible performance drop compared to the SOTA.
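The MIP loop described in the abstract — fit a random forest on (BPE hyper-parameter config, proxy accuracy) samples, then drop the least important hyper-parameter before the next iteration — can be sketched as below. This is a minimal illustration, not the authors' implementation: the hyper-parameter names, the synthetic `evaluate()` function, and all constants are assumptions standing in for actually training sampled architectures under each budget.

```python
# Minimal sketch of Minimum Importance Pruning (MIP): estimate hyper-parameter
# importance with a random forest and prune the least important dimension each
# iteration. evaluate() is a synthetic stand-in for "train architectures under
# this budgeted-PE config and measure proxy accuracy".
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
hyperparams = ["epochs", "channels", "layers", "batch_size", "lr"]
true_weights = np.array([0.5, 0.3, 0.1, 0.05, 0.02])  # hidden ground truth

def evaluate(full_config):
    # Cheap synthetic proxy for the (expensive) accuracy of architectures
    # trained under this BPE configuration.
    return float(true_weights @ full_config) + 0.01 * rng.standard_normal()

active = list(range(len(hyperparams)))     # indices of surviving hyper-params
while len(active) > 2:                     # stop once the budget space is small
    X = rng.random((64, len(active)))      # sampled BPE configs (active dims)
    y = []
    for x in X:
        full = np.zeros(len(hyperparams))
        full[active] = x                   # pruned hyper-params stay fixed at 0
        y.append(evaluate(full))
    forest = RandomForestRegressor(n_estimators=100, random_state=0)
    forest.fit(X, np.array(y))
    least = int(np.argmin(forest.feature_importances_))
    print("pruning:", hyperparams[active.pop(least)])

print("kept:", [hyperparams[i] for i in active])
```

Fixing pruned hyper-parameters and re-sampling only the surviving ones is what lets the compute budget concentrate on the dimensions that actually move the performance estimate.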
