Paper Title
Scrambling Ability of Quantum Neural Network Architectures
Paper Authors
Paper Abstract
In this letter, we propose a general principle for how to build a quantum neural network with high learning efficiency. Our strategy is based on the equivalence between extracting information from the input state to the readout qubit and scrambling information from the readout qubit to the input qubits. We characterize quantum information scrambling by operator size growth and, by Haar-random averaging over operator sizes, we propose an averaged operator size to describe the information-scrambling ability of a given quantum neural network architecture, and argue that this quantity is positively correlated with the learning efficiency of that architecture. As examples, we compute the averaged operator size for several different architectures, and we also consider two typical learning tasks: a regression task on a quantum problem and a classification task on classical images. In both cases, we find that for the architecture with the larger averaged operator size, the loss function decreases faster, or the prediction accuracy on the testing dataset increases faster, as the training epochs increase, which indicates higher learning efficiency. Our results can be generalized to more complicated quantum versions of machine learning algorithms.
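The operator-size diagnostic mentioned in the abstract can be illustrated with a small numerical sketch. This is an illustrative toy, not the paper's actual circuits or architectures: we Heisenberg-evolve a Pauli Z on a "readout" qubit through one layer of Haar-random two-qubit gates on three qubits, and measure how its average Pauli weight (the squared-coefficient-weighted number of non-identity factors) grows from its initial value of 1.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices; the "size" of a Pauli string is the number
# of non-identity factors it contains.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

N = 3  # total qubits; qubit 0 plays the role of the readout qubit


def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out


def operator_size(O):
    """Average Pauli weight of O, weighted by its squared Pauli coefficients."""
    norm, weighted = 0.0, 0.0
    for idx in itertools.product(range(4), repeat=N):
        P = kron_all([PAULIS[i] for i in idx])
        c = np.trace(P @ O) / 2**N
        p = abs(c) ** 2
        norm += p
        weighted += p * sum(1 for i in idx if i != 0)
    return weighted / norm


def haar_unitary(dim, rng):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(A)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))


def evolved_size(rng):
    """Size of Z_0 after one brick-wall layer of Haar-random 2-qubit gates."""
    O = kron_all([Z, I, I])  # Z on the readout qubit: initial size 1
    for pair in [(0, 1), (1, 2)]:
        U2 = haar_unitary(4, rng)
        U = np.kron(U2, I) if pair == (0, 1) else np.kron(I, U2)
        O = U.conj().T @ O @ U  # Heisenberg evolution
    return operator_size(O)


rng = np.random.default_rng(0)
samples = [evolved_size(rng) for _ in range(10)]
avg_size = float(np.mean(samples))
print("initial size:", operator_size(kron_all([Z, I, I])))  # exactly 1
print("averaged size after one layer:", avg_size)
```

Averaging the evolved size over Haar-random gate realizations mirrors, in miniature, the Haar-random averaging over operator sizes described in the abstract; architectures whose gate layout pushes this average up faster are the ones the paper argues learn more efficiently.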