Paper Title

Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems

Authors

Xugui Zhou, Maxfield Kouzel, Homa Alemzadeh

Abstract

The growing complexity of Cyber-Physical Systems (CPS) and the challenges of ensuring their safety and security have led to the increasing use of deep learning methods for accurate and scalable anomaly detection. However, machine learning (ML) models often perform poorly on unexpected data and are vulnerable to accidental or malicious perturbations. Although the robustness of deep learning models has been extensively studied in applications such as image classification and speech recognition, less attention has been paid to ML-driven safety monitoring in CPS. This paper presents preliminary results on evaluating the robustness of ML-based anomaly detection methods in safety-critical CPS against two types of input perturbations, accidental and malicious, generated using a Gaussian-based noise model and the Fast Gradient Sign Method (FGSM), respectively. We test the hypothesis that integrating domain knowledge (e.g., on unsafe system behavior) with ML models can improve the robustness of anomaly detection without sacrificing accuracy or transparency. Experimental results from two case studies of Artificial Pancreas Systems (APS) for diabetes management show that ML-based safety monitors trained with domain knowledge reduce the robustness error by 54.2% on average and maintain high average F1 scores while improving transparency.
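For context on the two perturbation types named in the abstract, the sketch below shows how such inputs are commonly generated: additive Gaussian noise for accidental perturbations and FGSM for adversarial ones. This is a minimal illustration assuming a differentiable PyTorch classifier as the safety monitor; the function names and the `sigma`/`epsilon` values are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

def gaussian_perturbation(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Accidental perturbation: additive zero-mean Gaussian noise."""
    return x + sigma * torch.randn_like(x)

def fgsm_perturbation(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      loss_fn: nn.Module, epsilon: float = 0.05) -> torch.Tensor:
    """Malicious perturbation: Fast Gradient Sign Method (FGSM).
    Shifts each input feature by epsilon in the direction that
    increases the monitor's loss on the true labels."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage: a linear monitor over 4 sensor features, binary safe/unsafe labels.
if __name__ == "__main__":
    model = nn.Linear(4, 2)
    x = torch.randn(8, 4)          # batch of sensor readings
    y = torch.randint(0, 2, (8,))  # ground-truth anomaly labels
    x_noisy = gaussian_perturbation(x)
    x_adv = fgsm_perturbation(model, x, y, nn.CrossEntropyLoss())
```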
