arXiv submission date: 2026-03-03
📄 Abstract - From Reachability to Learnability: Geometric Design Principles for Quantum Neural Networks

Classical deep networks are effective because depth enables adaptive geometric deformation of data representations. In quantum neural networks (QNNs), however, depth or state reachability alone does not guarantee this feature-learning capability. We study this question in the pure-state setting by viewing encoded data as an embedded manifold in $\mathbb{C}P^{2^n-1}$ and analysing infinitesimal unitary actions through Lie-algebra directions. We introduce Classical-to-Lie-algebra (CLA) maps and the criterion of almost Complete Local Selectivity (aCLS), which combines directional completeness with data-dependent local selectivity. Within this framework, we show that data-independent trainable unitaries are complete but non-selective, i.e. learnable rigid reorientations, whereas pure data encodings are selective but non-tunable, i.e. fixed deformations. Hence, geometric flexibility requires a non-trivial joint dependence on data and trainable weights. We further show that accessing high-dimensional deformations of many-qubit state manifolds requires parametrised entangling directions; fixed entanglers such as CNOT alone do not provide adaptive geometric control. Numerical examples validate that CLS-satisfying data re-uploading models outperform non-tunable schemes while requiring only a quarter of the gate operations. Thus, the resulting picture reframes QNN design from state reachability to controllable geometry of hidden quantum representations.
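The abstract's contrast between non-tunable encodings and CLS-satisfying data re-uploading models can be illustrated with a minimal single-qubit sketch (an illustrative construction, not the paper's actual circuits): rotation angles of the form `w * x` depend jointly on the data `x` and trainable weights, so changing the weights changes the deformation applied to the encoded state, whereas a fixed encoding would not.

```python
import numpy as np

# Pauli matrices, used here as Lie-algebra directions on one qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(generator, angle):
    """Single-qubit unitary exp(-i * angle/2 * generator) for a Pauli generator."""
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * generator

def reuploading_state(x, weights):
    """Toy L-layer data re-uploading circuit on one qubit.

    Each layer applies Rz(w_z * x) Rx(w_x * x): the angles depend
    jointly on the data x and the trainable weights (w_x, w_z), so the
    induced geometric deformation of the encoded data is tunable.
    A purely data-dependent encoding would fix these angles once and
    for all, leaving no adaptive control over the deformation.
    """
    state = np.array([1, 0], dtype=complex)  # start in |0>
    for w_x, w_z in weights:
        state = rot(Z, w_z * x) @ rot(X, w_x * x) @ state
    return state

# Same data point, two weight settings: the output states differ,
# showing the weight-dependence of the learned deformation.
s1 = reuploading_state(0.7, [(1.0, 0.5), (0.3, 1.2)])
s2 = reuploading_state(0.7, [(2.0, 0.5), (0.3, 1.2)])
```

The helper names (`rot`, `reuploading_state`) and the specific layer structure are assumptions for illustration; the paper's aCLS criterion concerns the directional span and data-dependent selectivity of such parametrised generators, which this toy model only hints at.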

Top-level tags: theory, machine learning, systems
Detailed tags: quantum neural networks, geometric learning, lie algebra, representation learning, quantum circuits

From Reachability to Learnability: Geometric Design Principles for Quantum Neural Networks


1️⃣ One-sentence summary

This paper argues that for quantum neural networks to learn as effectively as classical neural networks, the key is that trainable parameters and input data must act jointly to flexibly "bend" the geometry of the data represented by quantum states, rather than merely being able to generate a large set of quantum states.

Source: arXiv:2603.03071