arXiv submission date: 2026-03-16
📄 Abstract - Photonic Quantum-Enhanced Knowledge Distillation

Photonic quantum processors naturally produce intrinsically stochastic measurement outcomes, offering a hardware-native source of structured randomness that can be exploited during machine-learning training. Here we introduce Photonic Quantum-Enhanced Knowledge Distillation (PQKD), a hybrid quantum photonic--classical framework in which a programmable photonic circuit generates a compact conditioning signal that constrains and guides a parameter-efficient student network during distillation from a high-capacity teacher. PQKD replaces fully trainable convolutional kernels with dictionary convolutions: each layer learns only a small set of shared spatial basis filters, while sample-dependent channel-mixing weights are derived from shot-limited photonic features and mapped through a fixed linear transform. Training alternates between standard gradient-based optimisation of the student and sampling-robust, gradient-free updates of photonic parameters, avoiding differentiation through photonic hardware. Across MNIST, Fashion-MNIST and CIFAR-10, PQKD traces a controllable compression--accuracy frontier, remaining close to teacher performance on simpler benchmarks under aggressive convolutional compression. Performance degrades predictably with finite sampling, consistent with shot-noise scaling, and exponential moving-average feature smoothing suppresses high-frequency shot-noise fluctuations, extending the practical operating regime at moderate shot budgets.
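The dictionary-convolution idea above can be sketched in a few lines: each layer keeps only a small trainable dictionary of shared spatial basis filters, while the per-sample channel-mixing coefficients come from photonic features pushed through a fixed (non-trainable) linear map. The following is a minimal NumPy sketch, not the paper's implementation; all sizes, names, and the random stand-in for the fixed photonic-to-weight transform are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): C_in input channels,
# C_out output channels, K shared 3x3 basis filters.
C_in, C_out, K, ksize = 3, 4, 2, 3

# Trainable: a small dictionary of shared spatial basis filters.
dictionary = rng.standard_normal((K, ksize, ksize))

# Fixed (non-trainable) linear map from shot-limited photonic
# features to channel-mixing coefficients.
photonic_dim = 8
fixed_map = rng.standard_normal((C_out * C_in * K, photonic_dim))

def effective_kernels(photonic_features):
    """Build sample-dependent conv kernels from photonic features."""
    # Channel-mixing coefficients via the fixed linear transform.
    alpha = (fixed_map @ photonic_features).reshape(C_out, C_in, K)
    # Each (out, in) kernel is a linear combination of the shared
    # spatial basis filters.
    return np.einsum('oik,khw->oihw', alpha, dictionary)

# Simulated photonic feature vector for one sample.
z = rng.standard_normal(photonic_dim)
kernels = effective_kernels(z)
print(kernels.shape)  # (C_out, C_in, ksize, ksize)
```

With these toy sizes, the trainable spatial parameters per layer drop from C_out x C_in x 3 x 3 = 108 (a standard convolution) to K x 3 x 3 = 18, which is the compression mechanism the abstract describes; the mixing weights are derived per sample from photonic measurements rather than learned.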

Top-level tags: machine learning · model training · systems
Detailed tags: knowledge distillation · quantum photonics · hybrid systems · parameter efficiency · convolutional compression

Photonic Quantum-Enhanced Knowledge Distillation


1️⃣ One-sentence summary

This paper proposes a new method that combines a photonic quantum processor with classical machine learning, exploiting the structured randomness produced by photonic hardware to efficiently compress large neural-network models, substantially reducing the student model's parameter count while maintaining high accuracy.

Source: arXiv 2603.14898