Paper Title

Expressibility-Enhancing Strategies for Quantum Neural Networks

Authors

Yalin Liao, Junpeng Zhan

Abstract

Quantum neural networks (QNNs), represented by parameterized quantum circuits, can be trained in the paradigm of supervised learning to map input data to predictions. Much work has focused on theoretically analyzing the expressive power of QNNs. However, in almost all literature, QNNs' expressive power is numerically validated using only simple univariate functions. We surprisingly discover that state-of-the-art QNNs with strong expressive power can have poor performance in approximating even just a simple sinusoidal function. To fill the gap, we propose four expressibility-enhancing strategies for QNNs: Sinusoidal-friendly embedding, redundant measurement, post-measurement function, and random training data. We analyze the effectiveness of these strategies via mathematical analysis and/or numerical studies including learning complex sinusoidal-based functions. Our results from comparative experiments validate that the four strategies can significantly increase the QNNs' performance in approximating complex multivariable functions and reduce the quantum circuit depth and qubits required.
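To make the setting concrete, the sketch below simulates a minimal one-qubit parameterized quantum circuit in plain NumPy: trainable RY rotations interleaved with a data-encoding gate RX(x), measured in the Z basis. This is an illustrative toy model of the "QNN as parameterized quantum circuit" setup the abstract describes, not the paper's implementation; the function names and layer structure are assumptions for illustration. It also shows why such circuits are naturally suited to sinusoidal targets: with the trainable angles set to zero, the output is exactly cos(x), i.e., the model is a (truncated) Fourier series in the encoded variable.

```python
import numpy as np

# Single-qubit gate matrices (standard definitions).
def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

Z = np.diag([1.0, -1.0])  # Pauli-Z observable

def qnn(x, theta):
    """Toy one-qubit QNN: trainable RY layers interleaved with the
    data-encoding gate RX(x); returns the expectation value <Z>.
    (Hypothetical layout for illustration, not the authors' circuit.)"""
    state = np.array([1.0, 0.0], dtype=complex)  # |0>
    for t in theta[:-1]:
        state = rx(x) @ (ry(t) @ state)  # re-upload the data each layer
    state = ry(theta[-1]) @ state
    return float(np.real(np.conj(state) @ (Z @ state)))

# With all trainable angles at zero, the circuit reduces to RX(x)|0>,
# so <Z> = cos^2(x/2) - sin^2(x/2) = cos(x): a pure Fourier mode in x.
x = 0.7
print(qnn(x, np.zeros(2)))  # matches np.cos(0.7)
```

The depth of the re-uploading loop controls how many Fourier harmonics of x the output can contain, which is one lens on the expressibility questions the paper studies.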
