Paper Title


Deterministic and random features for large-scale quantum kernel machine

Paper Authors

Kouhei Nakaji, Hiroyuki Tezuka, Naoki Yamamoto

Abstract


Quantum machine learning (QML) is the spearhead of quantum computer applications. In particular, quantum neural networks (QNNs) are actively studied as methods that work in both near-term and fault-tolerant quantum computers. Recent studies have shown that supervised machine learning with QNNs can be interpreted as the quantum kernel method (QKM), suggesting that enhancing the practicality of the QKM is the key to building near-term applications of QML. However, the QKM is also known to have two severe issues. One is that the QKM with the (inner-product based) quantum kernel defined in the original large Hilbert space does not generalize; namely, the model fails to find patterns in unseen data. The other is that the classical computational cost of the QKM grows at least quadratically with the number of data points, and therefore the QKM is not scalable with data size. This paper aims to provide algorithms free from both of these issues. That is, for a class of quantum kernels with generalization capability, we show that the QKM with those quantum kernels can be made scalable by using our proposed deterministic and random features. Our numerical experiments, using datasets containing $O(1{,}000) \sim O(10{,}000)$ training data, support the validity of our method.
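To illustrate the scalability argument in the abstract, the sketch below uses *classical* random Fourier features (Rahimi–Recht style), not the paper's quantum features: an explicit feature map $z(x)$ with $z(x)^\top z(y) \approx k(x, y)$ lets a kernel machine be trained in time linear in the number of data points $N$, instead of the $O(N^2)$ cost of forming the exact kernel matrix. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=200, gamma=1.0):
    """Explicit feature map z(x) approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)  (classical illustration only)."""
    d = X.shape[1]
    # For w ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi),
    # E[z(x)^T z(y)] equals the RBF kernel above.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Ridge regression in feature space costs O(N * D^2) for D features,
# i.e. linear in N, versus O(N^2) just to build the exact kernel matrix.
X = rng.normal(size=(1000, 5))
y = np.sin(X[:, 0])
Z = random_fourier_features(X)                       # shape (1000, 200)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(Z.shape[1]), Z.T @ y)
pred = Z @ w
```

The paper's contribution is the quantum analogue of this idea: deterministic and random feature maps for a class of quantum kernels with generalization capability, so that the feature-space trick above applies at $O(1{,}000)$–$O(10{,}000)$ data sizes.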
