arXiv submission date: 2026-02-18
📄 Abstract - DistributedEstimator: Distributed Training of Quantum Neural Networks via Circuit Cutting

Circuit cutting decomposes a large quantum circuit into a collection of smaller subcircuits. The outputs of these subcircuits are then classically reconstructed to recover the original expectation values. While prior work characterises cutting overhead largely in terms of subcircuit counts and sampling complexity, its end-to-end impact on iterative, estimator-driven training pipelines remains insufficiently measured from a systems perspective. In this paper, we propose a cut-aware estimator execution pipeline that treats circuit cutting as a staged distributed workload and instruments each estimator query into partitioning, subexperiment generation, parallel execution, and classical reconstruction phases. Using logged runtime traces and learning outcomes on two binary classification workloads (Iris and MNIST), we quantify cutting overheads, scaling limits, and sensitivity to injected stragglers, and we evaluate whether accuracy and robustness are preserved under matched training budgets. Our measurements show that cutting introduces substantial end-to-end overheads that grow with the number of cuts, and that reconstruction constitutes a dominant fraction of per-query time, bounding achievable speed-up under increased parallelism. Despite these systems costs, test accuracy and robustness are preserved in the measured regimes, with configuration-dependent improvements observed in some cut settings. These results indicate that practical scaling of circuit cutting for learning workloads hinges on reducing and overlapping reconstruction and on scheduling policies that account for barrier-dominated critical paths.
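To make the staged pipeline concrete, here is a minimal Python sketch of one cut-aware estimator query, split into the four instrumented phases named in the abstract. This is an illustration under stated assumptions, not the authors' implementation: `partition_fn`, `generate_fn`, `run_fn`, and `reconstruct_fn` are hypothetical placeholders for whatever cutting toolkit is in use, and their signatures are invented for the sketch.

```python
import time
from itertools import cycle
from concurrent.futures import ThreadPoolExecutor


def timed(label, fn, *args, log, **kwargs):
    """Run fn and record its wall-clock duration in the phase log."""
    t0 = time.perf_counter()
    out = fn(*args, **kwargs)
    log[label] = log.get(label, 0.0) + (time.perf_counter() - t0)
    return out


def cut_aware_estimator_query(circuit, observable, params, backend_pool,
                              partition_fn, generate_fn, run_fn, reconstruct_fn):
    """One estimator query staged into partitioning, subexperiment
    generation, parallel execution, and classical reconstruction.

    The four *_fn callables are hypothetical stand-ins for a cutting
    toolkit; only the phase structure and timing are illustrated here.
    """
    phase_times = {}

    # Phase 1: partition the parameter-bound circuit into subcircuits.
    subcircuits = timed("partition", partition_fn,
                        circuit, params, log=phase_times)

    # Phase 2: expand each subcircuit into the subexperiments
    # (measurement/basis variants) needed for reconstruction.
    subexperiments, coeffs = timed("generate", generate_fn,
                                   subcircuits, observable, log=phase_times)

    # Phase 3: run subexperiments in parallel, round-robin over backends.
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(backend_pool)) as pool:
        results = list(pool.map(run_fn, subexperiments, cycle(backend_pool)))
    phase_times["execute"] = time.perf_counter() - t0  # barrier: wait for all

    # Phase 4: classically recombine the results into the original
    # expectation value.
    expval = timed("reconstruct", reconstruct_fn,
                   results, coeffs, observable, log=phase_times)

    return expval, phase_times
```

The explicit barrier at the end of phase 3 is why stragglers and reconstruction time sit on the critical path: no matter how many subexperiments run in parallel, every query still waits for the slowest subcircuit and then pays the full reconstruction cost, which matches the scaling limits reported in the paper's measurements.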

Top-level tags: systems, model training, machine learning
Detailed tags: quantum neural networks, circuit cutting, distributed training, benchmark, systems overhead

DistributedEstimator: Distributed Training of Quantum Neural Networks via Circuit Cutting


1️⃣ One-sentence summary

This paper proposes a new approach that cuts a large quantum circuit into multiple smaller circuits for distributed training, and finds experimentally that although the method preserves model accuracy, the computational overhead of classically reconstructing the subcircuit results is substantial and is the main bottleneck limiting its performance gains.

Source: arXiv 2602.16233