Paper Title


Maximum Likelihood Distillation for Robust Modulation Classification

Authors

Javier Maroto, Gérôme Bovet, Pascal Frossard

Abstract


Deep Neural Networks are being extensively used in communication systems and Automatic Modulation Classification (AMC) in particular. However, they are very susceptible to small adversarial perturbations that are carefully crafted to change the network decision. In this work, we build on knowledge distillation ideas and adversarial training in order to build more robust AMC systems. We first outline the importance of the quality of the training data in terms of accuracy and robustness of the model. We then propose to use the Maximum Likelihood function, which can solve the AMC problem in offline settings, to generate better training labels. Those labels teach the model to be uncertain in challenging conditions, which makes it possible to increase the accuracy, as well as the robustness of the model when combined with adversarial training. Interestingly, we observe that this increase in performance transfers to online settings, where the Maximum Likelihood function cannot be used in practice. Overall, this work highlights the potential of learning to be uncertain in difficult scenarios, compared to directly removing label noise.
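The core idea of generating soft training labels from a Maximum Likelihood classifier can be illustrated with a small sketch. The paper's actual label-generation pipeline is not reproduced here; the snippet below is a minimal illustration assuming an AWGN channel, equiprobable symbols, and a few example constellations, with the function name `ml_soft_label` being a hypothetical choice. The ML posterior over modulation classes naturally spreads probability mass across classes as the SNR drops, which is exactly the kind of "uncertain" label the abstract describes.

```python
import numpy as np

# Example constellations for three hypothetical modulation classes
# (unit average power). These stand in for whatever class set is used.
CONSTELLATIONS = {
    "BPSK": np.array([-1.0 + 0j, 1.0 + 0j]),
    "QPSK": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
    "PAM4": np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5),
}

def ml_soft_label(y, noise_var):
    """Posterior over modulation classes for received symbols y (AWGN).

    Assumes equiprobable classes and symbols. The normalized probability
    vector it returns can serve as a distillation target (soft label).
    """
    log_liks = []
    for points in CONSTELLATIONS.values():
        # Per-symbol log-likelihood, marginalizing over constellation points
        # (log-sum-exp over the mixture; additive constants shared by all
        # classes are dropped since they cancel in the final softmax).
        d2 = np.abs(y[:, None] - points[None, :]) ** 2        # (K, |A|)
        terms = -d2 / noise_var - np.log(len(points))
        m = terms.max(axis=1, keepdims=True)
        per_symbol = m[:, 0] + np.log(np.exp(terms - m).sum(axis=1))
        log_liks.append(per_symbol.sum())
    log_liks = np.array(log_liks)
    p = np.exp(log_liks - log_liks.max())                     # stable softmax
    return p / p.sum()

# Example: a noisy BPSK frame. The label concentrates on BPSK, but at
# lower SNR (larger noise_var) it spreads mass to the other classes.
rng = np.random.default_rng(0)
K, noise_var = 128, 0.5
sym = rng.choice(CONSTELLATIONS["BPSK"], K)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
label = ml_soft_label(sym + noise, noise_var)
```

A network distilled on such labels would be trained with a cross-entropy loss against `label` instead of a one-hot target, optionally on adversarially perturbed inputs as in standard adversarial training.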
