arXiv submission date: 2026-02-15
📄 Abstract - A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers

Differentiating through the solution of a quadratic program (QP) is a central problem in differentiable optimization. Most existing approaches differentiate through the Karush--Kuhn--Tucker (KKT) system, but their computational cost and numerical robustness can degrade at scale. To address these limitations, we propose dXPP, a penalty-based differentiation framework that decouples QP solving from differentiation. In the solving step (forward pass), dXPP is solver-agnostic and can leverage any black-box QP solver. In the differentiation step (backward pass), we map the solution to a smooth approximate penalty problem and implicitly differentiate through it, requiring only the solution of a much smaller linear system in the primal variables. This approach bypasses the difficulties inherent in explicit KKT differentiation and significantly improves computational efficiency and robustness. We evaluate dXPP on various tasks, including randomly generated QPs, large-scale sparse projection problems, and a real-world multi-period portfolio optimization task. Empirical results demonstrate that dXPP is competitive with KKT-based differentiation methods and achieves substantial speedups on large-scale problems.
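The backward pass described above can be sketched in a few lines: replace the QP's constraints with a smooth penalty, and apply the implicit function theorem at the penalty minimizer, so that only an n × n linear system in the primal variables is solved. This is an illustrative sketch under assumptions, not the authors' dXPP implementation: the softplus penalty, the Newton forward solver (which dXPP would replace with any black-box QP solver), and the parameters `rho` and `eps` are all choices made here for concreteness.

```python
import numpy as np

# Sketch: differentiate  min_x 1/2 x'Qx + q'x  s.t. Ax <= b
# via the smooth penalty approximation (softplus barrier; an assumption)
#   f(x) = 1/2 x'Qx + q'x + rho * eps * sum(softplus((Ax - b) / eps)).

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def penalty_value(x, Q, q, A, b, rho, eps):
    r = (A @ x - b) / eps
    # np.logaddexp(0, r) is a numerically stable softplus
    return 0.5 * x @ Q @ x + q @ x + rho * eps * np.sum(np.logaddexp(0.0, r))

def penalty_grad_hess(x, Q, q, A, b, rho, eps):
    r = (A @ x - b) / eps
    s = sigmoid(r)
    grad = Q @ x + q + rho * A.T @ s
    H = Q + (rho / eps) * A.T @ ((s * (1.0 - s))[:, None] * A)
    return grad, H

def solve_penalty(Q, q, A, b, rho=10.0, eps=0.1, iters=50):
    """Forward pass: damped Newton on the smooth penalty objective.
    (dXPP is solver-agnostic; any black-box QP solver could be used here.)"""
    x = np.zeros_like(q)
    for _ in range(iters):
        g, H = penalty_grad_hess(x, Q, q, A, b, rho, eps)
        dx = -np.linalg.solve(H, g)
        t, fx = 1.0, penalty_value(x, Q, q, A, b, rho, eps)
        while t > 1e-10 and penalty_value(x + t * dx, Q, q, A, b, rho, eps) > fx:
            t *= 0.5  # backtracking line search for global convergence
        x = x + t * dx
    return x

def backward_dq(x_star, Q, q, A, b, rho, eps, g_out):
    """Backward pass: vector-Jacobian product dL/dq given g_out = dL/dx*.
    At the penalty minimizer grad_x f(x*, q) = 0, so by implicit
    differentiation dx*/dq = -H^{-1}; only one n x n primal system
    is solved, with no KKT matrix or dual variables involved."""
    _, H = penalty_grad_hess(x_star, Q, q, A, b, rho, eps)
    return -np.linalg.solve(H, g_out)
```

A typical use is `x_star = solve_penalty(Q, q, A, b)` followed by `backward_dq(x_star, Q, q, A, b, rho, eps, g_out)`; the returned vector agrees with finite-difference derivatives of the penalty solution, and the same implicit-differentiation pattern extends to gradients with respect to `Q`, `A`, and `b`.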

Top-level tags: machine learning · systems · theory
Detailed tags: differentiable optimization · quadratic programming · implicit differentiation · backpropagation · computational efficiency

A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers


1️⃣ One-sentence summary

This paper proposes a new method called dXPP, which decouples the solving and differentiation steps and uses a penalty function to approximate the original problem, enabling efficient and robust computation of gradients of quadratic programs, particularly for large-scale optimization tasks.

Source: arXiv:2602.14154