HeRo-Q: A General Framework for Stable Low-Bit Quantization via Hessian Conditioning
1️⃣ One-Sentence Summary
This paper proposes a new method called HeRo-Q, which smooths the model's loss landscape by carefully adjusting the "directions" of its parameters, allowing large models to work stably even at very low precision (e.g., 3-bit) without performance collapse, while requiring no changes to the model architecture and adding little computational overhead.
Post-Training Quantization (PTQ), a mainstream model-compression technique, often exhibits a paradoxical "low error, high loss" phenomenon because it focuses solely on minimizing quantization error. The root cause lies in the Hessian of the LLM loss landscape: a few high-curvature directions are extremely sensitive to perturbations. To address this, we propose the Hessian-Robust Quantization (HeRo-Q) algorithm, which applies a lightweight, learnable rotation-compression matrix to the weight space prior to quantization. This joint framework reshapes the loss landscape by reducing the Hessian's largest eigenvalue, thereby significantly enhancing robustness to quantization noise. HeRo-Q requires no architectural modifications, incurs negligible computational overhead, and integrates seamlessly into existing PTQ pipelines. Experiments on Llama and Qwen models show that HeRo-Q consistently outperforms state-of-the-art methods including GPTQ, AWQ, and SpinQuant: it not only achieves superior performance under standard W4A8 settings but also excels in the highly challenging W3A16 ultra-low-bit regime, where it boosts GSM8K accuracy on Llama3-8B to 70.15% and effectively avoids the logical collapse commonly seen in aggressive quantization.
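The intuition behind rotating the weight space before quantization can be illustrated with a minimal NumPy sketch. This is a toy under stated assumptions, not the HeRo-Q implementation: it uses a fixed random orthogonal rotation in place of the paper's learned rotation-compression matrix, and a simple symmetric per-tensor quantizer. Rotating a weight matrix that contains an outlier spreads the outlier's energy across many coordinates, which shrinks the quantization scale and typically the reconstruction error:

```python
import numpy as np

def quantize(w, bits=4):
    """Symmetric uniform per-tensor quantization (toy stand-in for a PTQ kernel)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W[0, 0] = 50.0  # a single outlier dominates the quantization scale

# Fixed random orthogonal rotation (hypothetical stand-in for the learned matrix),
# obtained from the QR decomposition of a Gaussian matrix.
R, _ = np.linalg.qr(rng.normal(size=(64, 64)))

err_direct = np.linalg.norm(quantize(W) - W)
# Rotate, quantize in the rotated basis, rotate back (R orthogonal => W @ R @ R.T == W).
err_rotated = np.linalg.norm(quantize(W @ R) @ R.T - W)
print(err_direct, err_rotated)
```

Because the rotation is orthogonal, it changes nothing at full precision; it only changes which basis the quantization noise is injected in, which is the same lever the abstract describes at the level of Hessian eigenvalues.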
Source: arXiv: 2601.21626