Paper Title

TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets

Authors

Chengrun Yang, Gabriel Bender, Hanxiao Liu, Pieter-Jan Kindermans, Madeleine Udell, Yifeng Lu, Quoc Le, Da Huang

Abstract


The best neural architecture for a given machine learning problem depends on many factors: not only the complexity and structure of the dataset, but also on resource constraints including latency, compute, energy consumption, etc. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning (RL) rewards. However, for NAS on tabular datasets, this protocol often discovers suboptimal architectures. This paper develops TabNAS, a new and more effective approach to handle resource constraints in tabular NAS using an RL controller motivated by the idea of rejection sampling. TabNAS immediately discards any architecture that violates the resource constraints without training or learning from that architecture. TabNAS uses a Monte-Carlo-based correction to the RL policy gradient update to account for this extra filtering step. Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints.
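The abstract's core mechanism — discard infeasible samples outright, then correct the REINFORCE gradient so the controller learns the distribution conditioned on feasibility — can be sketched on a toy search space. This is an illustrative sketch only, not the authors' implementation: the search space, `costs`, `budget`, and all function names are hypothetical, and the correction shown (subtracting the gradient of the feasible-set probability, with that probability estimated by Monte Carlo) is one plausible reading of the described update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy search space: 5 candidate architectures with resource
# "costs"; only those within the budget are feasible (here, indices 0-2).
costs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
budget = 3.5
feasible = costs <= budget

# Controller parameters: one logit per candidate architecture.
logits = np.zeros(5)

def probs(logits):
    """Softmax over architecture logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_feasible(logits, n_mc=1000):
    """Rejection sampling: draw until a feasible architecture appears.
    Also return a Monte Carlo estimate of P(feasible), needed below."""
    p = probs(logits)
    draws = rng.choice(len(p), size=n_mc, p=p)
    p_feasible_hat = feasible[draws].mean()
    while True:
        y = rng.choice(len(p), p=p)
        if feasible[y]:  # infeasible samples are discarded without training
            return y, p_feasible_hat

def corrected_grad_logp(logits, y, p_feasible_hat):
    """Gradient of log P(y | feasible) w.r.t. the logits:
    grad log P(y) - grad P(V) / P_hat(V), with P(V) the feasible-set
    probability replaced by its Monte Carlo estimate."""
    p = probs(logits)
    grad_logp_y = -p.copy()
    grad_logp_y[y] += 1.0                      # grad of log P(y)
    pV = p[feasible].sum()
    grad_pV = np.where(feasible, p, 0.0) - p * pV  # grad of P(V)
    return grad_logp_y - grad_pV / max(p_feasible_hat, 1e-8)
```

A REINFORCE-style update would then be `logits += lr * reward * corrected_grad_logp(...)`, so only feasible architectures are ever trained or learned from, while the correction keeps the gradient unbiased for the feasibility-conditioned policy.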
