Paper Title
Automatic Machine Learning for Multi-Receiver CNN Technology Classifiers
Paper Authors
Abstract
Convolutional Neural Networks (CNNs) are one of the most widely studied families of deep learning models for signal classification tasks, including modulation classification, technology classification, and signal detection and identification. In this work, we focus on technology classification based on raw I/Q samples collected from multiple synchronized receivers. As an example use case, we study protocol identification for the Wi-Fi, LTE-LAA, and 5G NR-U technologies that coexist over the 5 GHz Unlicensed National Information Infrastructure (U-NII) bands. Designing and training accurate CNN classifiers involves significant time and effort spent fine-tuning a model's architectural settings and determining appropriate hyperparameter configurations, such as learning rate and batch size. We tackle the former by defining the architectural settings themselves as hyperparameters. We automatically optimize these architectural parameters, along with other preprocessing hyperparameters (e.g., the number of I/Q samples within each classifier input) and learning hyperparameters, by forming a Hyperparameter Optimization (HyperOpt) problem, which we solve in a near-optimal fashion using the Hyperband algorithm. The resulting near-optimal CNN (OCNN) classifier is then used to study classification accuracy on both over-the-air (OTA) and simulated datasets, considering various SNR values. We show that the number of receivers used to construct multi-channel inputs for CNNs should be defined as a preprocessing hyperparameter to be optimized via Hyperband. OTA results reveal that our OCNN classifiers improve classification accuracy by 24.58% compared to manually tuned CNNs. We also study the effect of min-max normalization of the I/Q samples within each classifier input on generalization accuracy over simulated datasets whose SNRs differ from the training set's SNR, and show an average improvement of 108.05% when the I/Q samples are normalized.
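The preprocessing pipeline the abstract describes — stacking I/Q streams from multiple synchronized receivers into a multi-channel classifier input and applying per-input min-max normalization — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the receiver count, window length, and array layout chosen here are assumptions, and in the paper both `n_receivers` and `n_samples` are hyperparameters tuned by Hyperband.

```python
import numpy as np

# Assumed (hypothetical) hyperparameter values for illustration only; the
# paper treats both as preprocessing hyperparameters optimized via Hyperband.
n_receivers = 4    # number of synchronized receivers
n_samples = 256    # I/Q samples per classifier input window

rng = np.random.default_rng(0)
# Stand-in for a raw capture: one complex I/Q stream per receiver.
raw = (rng.standard_normal((n_receivers, n_samples))
       + 1j * rng.standard_normal((n_receivers, n_samples)))

# Build a multi-channel real-valued input: the I and Q components of each
# receiver become separate channels, giving shape (2 * n_receivers, n_samples).
x = np.concatenate([raw.real, raw.imag], axis=0)

# Per-input min-max normalization to [0, 1], computed over the whole input,
# as studied in the abstract's generalization experiments.
x_min, x_max = x.min(), x.max()
x_norm = (x - x_min) / (x_max - x_min)
```

Normalizing per input (rather than with dataset-wide statistics) removes the dependence on absolute received power, which is what lets a classifier trained at one SNR generalize better to others.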