Paper Title
Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification
Paper Authors

Paper Abstract
In recent years, the number of deployed IoT devices has exploded, reaching the scale of billions. However, this growth has been accompanied by new cybersecurity issues, such as the deployment of unauthorized devices, malicious code modification, malware deployment, or vulnerability exploitation. This has motivated the need for new device identification mechanisms based on behavior monitoring. Moreover, thanks to advances in the field and increased processing capabilities, these solutions have recently leveraged Machine and Deep Learning (ML/DL) techniques. Attackers, however, have not stood still and have developed adversarial attacks against IoT device identification solutions, focused on context modification and ML/DL evasion. This work explores the performance of hardware behavior-based individual device identification, how it is affected by possible context- and ML/DL-focused attacks, and how its resilience can be improved using defense techniques. To that end, it proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification. The proposed architecture is then compared with previous techniques using a hardware performance dataset collected from 45 Raspberry Pi devices running identical software. The LSTM-CNN improves on previous solutions, achieving an average F1-Score of over 0.96 and a minimum TPR of 0.8 across all devices. Afterward, context- and ML/DL-focused adversarial attacks were applied against the model to test its robustness. A temperature-based context attack was not able to disrupt the identification, but several state-of-the-art ML/DL evasion attacks were successful. Finally, adversarial training and model distillation defense techniques were selected to improve the model's resilience to evasion attacks without degrading its performance.
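The abstract does not specify the exact layer configuration of the proposed LSTM-CNN, so the snippet below is only a minimal sketch of one plausible stacking in Keras. The window length (100 time steps), the number of monitored hardware performance counters (10), and all layer sizes are illustrative assumptions; only the 45-class output (one class per Raspberry Pi device) comes from the abstract.

```python
# Illustrative LSTM-CNN sketch for hardware-performance-based device identification.
# Input shape, layer sizes, and layer ordering are assumptions; the paper's exact
# architecture is not described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

TIME_STEPS = 100   # assumed length of each performance-counter window
N_FEATURES = 10    # assumed number of monitored hardware counters
N_DEVICES = 45     # one class per Raspberry Pi device in the dataset

model = models.Sequential([
    layers.Input(shape=(TIME_STEPS, N_FEATURES)),
    # Recurrent layer captures temporal dependencies in the counter series
    layers.LSTM(128, return_sequences=True),
    # Convolutional block extracts local patterns from the LSTM outputs
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.3),
    layers.Dense(N_DEVICES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```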