Paper Title
Direct Federated Neural Architecture Search
Paper Authors
Paper Abstract
Neural Architecture Search (NAS) is a collection of methods for automating the design of neural network architectures. We apply this idea to Federated Learning (FL), wherein predefined neural network models are trained on client/device data. That approach is suboptimal because model developers cannot observe the local data and hence cannot build highly accurate and efficient models. NAS is promising for FL because it can automatically search for global and personalized models suited to clients' non-IID data. Most NAS methods, however, are computationally expensive and require fine-tuning after the search, making them a complex two-stage process with possible human intervention. Thus, there is a need for end-to-end NAS that can run on the heterogeneous data and resource distributions typically seen in FL scenarios. In this paper, we present an effective approach to direct federated NAS: a hardware-agnostic, computationally lightweight, one-stage method that searches for ready-to-deploy neural network models. Our results show an order-of-magnitude reduction in resource consumption while edging out prior art in accuracy. This opens a window of opportunity for creating optimized and computationally efficient federated learning systems.
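To make the one-stage idea concrete, the following is a minimal, runnable sketch of what a direct federated NAS loop could look like on a toy regression problem: each client jointly updates model weights and differentiable architecture parameters (a DSNAS-style softmax mixed operation) on its non-IID local data, and a FedAvg-style server averages both. The toy model and all names (`Client`, `local_steps`, `fedavg_round`) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Hypothetical sketch: direct (one-stage) federated NAS = joint training of
# weights and architecture parameters on clients + FedAvg-style aggregation.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_OPS = 8, 4  # candidate operations per mixed edge (toy scale)

class Client:
    """Holds a non-IID local dataset and trains model weights and
    architecture parameters (alpha) jointly, in a single stage."""
    def __init__(self, n_samples, shift):
        # A per-client shift makes the local data distributions non-IID.
        self.X = rng.normal(loc=shift, size=(n_samples, N_FEATURES))
        self.y = self.X @ rng.normal(size=N_FEATURES) \
                 + 0.1 * rng.normal(size=n_samples)
        self.n = n_samples

    def local_steps(self, w, alpha, lr=0.05, steps=5):
        w, alpha = w.copy(), alpha.copy()
        for _ in range(steps):
            p = np.exp(alpha) / np.exp(alpha).sum()  # op probabilities
            z = self.X @ w                           # candidate-op outputs
            err = z @ p - self.y                     # mixed-op prediction error
            # Joint gradients for weights AND architecture in one pass,
            # which is what removes the separate fine-tuning stage.
            grad_w = np.outer(self.X.T @ err, p) / self.n
            g = z.T @ err / self.n
            grad_alpha = p * (g - p @ g)             # softmax Jacobian
            w -= lr * grad_w
            alpha -= lr * grad_alpha
        return w, alpha

def fedavg_round(clients, w, alpha):
    """One communication round: broadcast, local training, weighted average."""
    results = [c.local_steps(w, alpha) for c in clients]
    total = sum(c.n for c in clients)
    w = sum(c.n / total * wi for c, (wi, _) in zip(clients, results))
    alpha = sum(c.n / total * ai for c, (_, ai) in zip(clients, results))
    return w, alpha

clients = [Client(n_samples=100, shift=s) for s in (-1.0, 0.0, 1.0)]
w = 0.1 * rng.normal(size=(N_FEATURES, N_OPS))
alpha = np.zeros(N_OPS)
for _ in range(50):
    w, alpha = fedavg_round(clients, w, alpha)
print("op probabilities:", np.round(np.exp(alpha) / np.exp(alpha).sum(), 3))
print("selected op:", int(np.argmax(alpha)))  # ready-to-deploy choice
```

Because the architecture parameters are trained alongside the weights in the same federated rounds, the argmax subnetwork at the end is already trained: under these assumptions there is no second search-then-retrain stage, which is the property the abstract refers to as "direct" and "one-stage".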