Classical and Quantum Speedups for Non-Convex Optimization via Energy Conserving Descent
1️⃣ One-sentence summary
By analyzing a new optimization method called Energy Conserving Descent and its quantum counterpart, this paper proves that on non-convex optimization problems these methods can effectively escape local minima and reach the global minimum, achieving exponential computational speedups over traditional gradient descent.
The Energy Conserving Descent (ECD) algorithm was recently proposed (De Luca & Silverstein, 2022) as a global non-convex optimization method. Unlike gradient descent, appropriately configured ECD dynamics escape strict local minima and converge to a global minimum, making it appealing for machine learning optimization. We present the first analytical study of ECD, focusing in this first installment on the one-dimensional setting. We formalize a stochastic ECD dynamics (sECD) with energy-preserving noise, as well as a quantum analog of the ECD Hamiltonian (qECD), providing the foundation for a quantum algorithm through Hamiltonian simulation. For positive double-well objectives, we compute the expected hitting time from a local to the global minimum. We prove that both sECD and qECD yield exponential speedups over their respective gradient descent baselines: stochastic gradient descent and its quantization. For objectives with tall barriers, qECD achieves a further speedup over sECD.
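The key intuition behind ECD can be illustrated numerically. The sketch below is a minimal toy model, not the authors' exact algorithm: it integrates energy-conserving Hamiltonian dynamics (via a standard leapfrog scheme) on an illustrative positive double-well objective of our own choosing, and compares it with plain gradient descent started at the strict local minimum. Because the Hamiltonian trajectory approximately conserves total energy, it explores the whole sublevel set below its initial energy and crosses barriers lower than that energy, whereas gradient descent stays trapped.

```python
import numpy as np

def V(x):
    # illustrative positive double-well (not from the paper):
    # global minimum near x ≈ -1, strict local minimum near x ≈ +1
    return (x**2 - 1.0)**2 + 0.3 * x + 0.4

def dV(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

def gradient_descent(x0, lr=0.01, steps=5000):
    # plain gradient descent: converges to the nearest local minimum
    x = x0
    for _ in range(steps):
        x -= lr * dV(x)
    return x

def energy_conserving_trajectory(x0, p0, dt=0.01, steps=5000):
    # leapfrog integration of H(x, p) = p^2 / 2 + V(x); the scheme is
    # symplectic, so total energy is approximately conserved and the
    # trajectory can cross any barrier lower than H(x0, p0)
    xs = [x0]
    x, p = x0, p0
    p -= 0.5 * dt * dV(x)      # initial half kick
    for _ in range(steps):
        x += dt * p            # drift
        p -= dt * dV(x)        # kick
        xs.append(x)
    return np.array(xs)

# start both methods at the strict local minimum near x = +1
gd_end = gradient_descent(1.0)                       # stays trapped near +1
traj = energy_conserving_trajectory(1.0, p0=1.5)     # energy exceeds the barrier
print(gd_end > 0.5, traj.min() < -0.5)               # trapped vs. escaped
```

Here the initial momentum `p0 = 1.5` is an assumption chosen so that the total energy exceeds the barrier height; the paper's sECD and qECD variants replace this deterministic toy dynamics with energy-preserving noise and Hamiltonian simulation, respectively, which is what their hitting-time analysis quantifies.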
Source: arXiv:2604.13022