Paper Title

Robust in Practice: Adversarial Attacks on Quantum Machine Learning

Paper Authors

Haoran Liao, Ian Convy, William J. Huggins, K. Birgitta Whaley

Paper Abstract

State-of-the-art classical neural networks are observed to be vulnerable to small, crafted adversarial perturbations. A more severe vulnerability has been noted for quantum machine learning (QML) models classifying Haar-random pure states. This stems from the concentration of measure phenomenon, a property of the metric space when sampled probabilistically, and is independent of the classification protocol. To provide insight into the adversarial robustness of quantum classifiers on real-world classification tasks, we focus on the robustness of classifying a subset of encoded states that are smoothly generated from a Gaussian latent space. We show that the vulnerability in this task is considerably weaker than that in classifying Haar-random pure states. In particular, we find that the robustness decreases only mildly, polynomially in the number of qubits, in contrast to the exponentially decreasing robustness when classifying Haar-random pure states, suggesting that QML models can be useful for real-world classification tasks.
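To make the scaling contrast in the abstract concrete, below is a minimal NumPy sketch, not taken from the paper: it estimates how sharply the expectation value of a fixed observable concentrates for Haar-random pure states as the qubit count n grows, versus states generated smoothly from a one-dimensional Gaussian latent variable. The observable (Pauli-Z on the first qubit) and the latent encoding are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch (not the authors' code) of the concentration-of-measure
# contrast: for Haar-random pure states, a fixed observable's expectation
# concentrates exponentially in the qubit count n, while states generated
# smoothly from a Gaussian latent variable keep an n-independent spread.
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim, rng):
    """Sample a Haar-random pure state as a normalized complex Gaussian vector."""
    z = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return z / np.linalg.norm(z)

def latent_state(theta, dim):
    """Hypothetical smooth encoding of a scalar Gaussian latent variable:
    rotate amplitude between a basis state with the first qubit in |0> and
    one with the first qubit in |1>. Illustrative, not the paper's encoding."""
    psi = np.zeros(dim, dtype=complex)
    psi[0] = np.cos(theta)
    psi[dim // 2] = np.sin(theta)
    return psi

for n in [2, 4, 6, 8, 10]:
    dim = 2 ** n
    # <Z> on the first qubit: +1 on the first half of the computational
    # basis (first qubit |0>), -1 on the second half (first qubit |1>).
    signs = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)

    haar_vals = [signs @ np.abs(haar_state(dim, rng)) ** 2
                 for _ in range(2000)]
    latent_vals = [signs @ np.abs(latent_state(t, dim)) ** 2
                   for t in rng.normal(size=2000)]

    # Haar spread shrinks ~2^(-n/2); the latent family's spread does not.
    print(f"n={n:2d}  Haar std={np.std(haar_vals):.4f}  "
          f"latent std={np.std(latent_vals):.4f}")
```

Under these assumptions, the Haar-random spread shrinks roughly as 2^(-n/2), so typical states sit exponentially close to any fixed decision threshold, while the latent-generated family retains an n-independent spread, mirroring the polynomial-versus-exponential robustness contrast described in the abstract.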
