Paper Title
Colab NAS: Obtaining lightweight task-specific convolutional neural networks following Occam's razor
Paper Authors
Paper Abstract
The current trend of applying transfer learning from convolutional neural networks (CNNs) trained on large datasets can be overkill when the target application is a custom, delimited problem with enough data to train a network from scratch. On the other hand, training custom, lighter CNNs requires expertise, in the from-scratch case, and/or high-end resources, as in the case of hardware-aware neural architecture search (HW NAS), limiting access to the technology for non-habitual NN developers. For this reason, we present ColabNAS, an affordable HW NAS technique for producing lightweight task-specific CNNs. Its novel derivative-free search strategy, inspired by Occam's razor, obtains state-of-the-art results on the Visual Wake Word dataset, a standard TinyML benchmark, in just 3.1 GPU hours using free online GPU services such as Google Colaboratory and Kaggle Kernels.