Paper Title


Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning

Paper Authors

Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Yi Ouyang, I-Te Danny Hung, Chin-Hui Lee, Xiaoli Ma

Paper Abstract


Recent deep neural network based techniques, especially those equipped with system-level self-adaptation such as deep reinforcement learning (DRL), have been shown to offer many advantages for optimizing robot learning systems (e.g., autonomous navigation and continuous robot arm control). However, such learning-based systems and their associated models may be threatened by intentionally adaptive (e.g., noisy sensor confusion) and adversarial perturbations from real-world scenarios. In this paper, we introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames. To study the vulnerability of learning-based navigation systems, we propose two adversarial agent models: one based on online learning, the other on evolutionary learning. In addition, three open-source robot learning and navigation control environments are employed to study their vulnerability under adversarial timing attacks. Our experimental results show that adversarial timing attacks can lead to a significant performance drop, and also suggest the necessity of enhancing the robustness of robot learning systems.
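The core idea of a strategically-timed attack is to perturb the victim's observation only at "critical" time frames rather than at every step. A minimal sketch of one common timing criterion is shown below: attack when the policy's action-preference gap (max minus min action probability) exceeds a threshold. All names, the preference-gap criterion, and the Gaussian noise model are illustrative assumptions here, not the paper's exact attack.

```python
import numpy as np


def action_preference(policy_probs):
    """Gap between the most and least preferred action probabilities.

    A large gap means the agent strongly commits to one action at this
    step, so a perturbation injected now is likely to change behavior.
    (Illustrative criterion; the paper's agents learn when to attack.)
    """
    return float(np.max(policy_probs) - np.min(policy_probs))


def strategically_timed_noise(obs, policy_probs, threshold=0.5,
                              noise_scale=0.1, rng=None):
    """Inject noise into the observation only at critical time frames.

    Returns the (possibly perturbed) observation and a flag indicating
    whether this frame was attacked.
    """
    rng = np.random.default_rng() if rng is None else rng
    if action_preference(policy_probs) > threshold:
        # Jam a physical-noise-like pattern into this frame (here,
        # simple Gaussian noise as a stand-in).
        return obs + noise_scale * rng.standard_normal(obs.shape), True
    return obs, False
```

Under this criterion, a frame where the policy is confident (e.g., probabilities `[0.9, 0.05, 0.03, 0.02]`) is attacked, while a frame with a near-uniform policy is left untouched, keeping the total number of perturbed frames small.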
