Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
1️⃣ One-Sentence Summary
This paper proposes a method for adaptively learning the relaxation parameter of the ADMM algorithm, adjusting it online to accelerate the solution of structured convex optimization problems (e.g., quadratic programs arising in model predictive control). The authors prove that the algorithm still converges when the parameter varies over time, and experiments show the method outperforms the classic OSQP solver in both iteration count and computation time.
The Alternating Direction Method of Multipliers (ADMM) is a widely used method for structured convex optimization, and its practical performance depends strongly on the choice of penalty and relaxation parameters. Motivated by settings such as Model Predictive Control (MPC), where one repeatedly solves related optimization problems with fixed structure and changing parameter values, we propose learning online updates of the relaxation parameter to improve performance on problem classes of interest. This choice is computationally attractive in OSQP-like architectures, since adapting relaxation does not trigger the matrix refactorizations associated with penalty updates. We establish convergence guarantees for ADMM with time-varying penalty and relaxation parameters under mild assumptions, and show on benchmark quadratic programs that the resulting learned policies improve both iteration count and wall-clock time over baseline OSQP.
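To make the role of the relaxation parameter concrete, here is a minimal sketch of OSQP-style over-relaxed ADMM on a box-constrained QP, min ½xᵀPx + qᵀx s.t. l ≤ x ≤ u. This is a didactic illustration under assumed notation, not the paper's method: `alpha` is held fixed here, whereas the paper learns an online update policy for it. Note that the factorization in the x-update depends on the penalty `rho` but not on `alpha`, which is why adapting the relaxation parameter avoids the refactorizations that penalty updates trigger.

```python
import numpy as np

def admm_box_qp(P, q, l, u, rho=1.0, alpha=1.6, iters=300):
    """Over-relaxed ADMM for: min 1/2 x'Px + q'x  s.t.  l <= x <= u.

    alpha in (0, 2) is the (over-)relaxation parameter; alpha > 1
    typically accelerates convergence. Sketch only: the paper adapts
    alpha online per iteration rather than fixing it.
    """
    n = len(q)
    x = z = y = np.zeros(n)
    # Factorization depends on rho but NOT on alpha, so changing
    # alpha between iterations is computationally cheap.
    L = np.linalg.cholesky(P + rho * np.eye(n))
    for _ in range(iters):
        rhs = rho * z - y - q
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        x_hat = alpha * x + (1 - alpha) * z                # over-relaxation step
        z = np.clip(x_hat + y / rho, l, u)                 # projection (z-update)
        y = y + rho * (x_hat - z)                          # dual ascent
    return z

# Small example with an interior optimum: x* = P^{-1} [1, 1]^T = [1/7, 3/7]
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x_star = admm_box_qp(P, q, l=np.zeros(2), u=np.ones(2))
```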
Source: arXiv: 2604.26932