arXiv submission date: 2026-04-09
📄 Abstract - Rethinking Residual Errors in Compensation-based LLM Quantization

Methods based on weight compensation, which iteratively apply quantization and weight compensation to minimize the output error, have recently demonstrated remarkable success in quantizing Large Language Models (LLMs). The representative work, GPTQ, introduces several key techniques that make such iterative methods practical for LLMs with billions of parameters. GPTAQ extends this approach by introducing an asymmetric calibration process that aligns the output of each quantized layer with its full-precision counterpart, incorporating a residual error into the weight compensation framework. In this work, we revisit the formulation of the residual error. We identify a sub-optimal calibration objective in existing methods: during the intra-layer calibration process, they align the quantized output with the output from compensated weights, rather than the true output from the original full-precision model. Therefore, we redefine the objective to precisely align the quantized model's output with the original output of the full-precision model at each step. We then reveal that the residual error originates not only from the output difference of the preceding layer but also from the discrepancy between the compensated and original weights within each layer, which we name the 'compensation-aware error'. By inheriting the neuron decomposition technique from GPTAQ, we can efficiently incorporate this compensation-aware error into the weight update process. Extensive experiments on various LLMs and quantization settings demonstrate that our proposed enhancements integrate seamlessly with both GPTQ and GPTAQ, significantly improving their quantization performance. Our code is publicly available at this https URL.
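The quantize-and-compensate loop that GPTQ popularized, and on which GPTAQ and this paper build, can be sketched as follows. This is a simplified NumPy illustration, not the paper's implementation: the function name `gptq_like`, the uniform rounding grid, and the plain matrix inverse (in place of GPTQ's Cholesky-based inverse-Hessian update) are all assumptions made for brevity.

```python
import numpy as np

def quantize(w, scale):
    # Uniform round-to-nearest quantization (illustrative grid).
    return np.round(w / scale) * scale

def gptq_like(W, X, scale=0.05, damp=0.01):
    """Column-wise quantize-and-compensate sketch.

    W: (rows, cols) weight matrix; X: (cols, n_samples) calibration inputs.
    Greedily reduces the layer output error ||W X - Wq X||^2 by quantizing
    one column at a time and spreading its error over the remaining columns.
    """
    W = W.astype(np.float64).copy()
    cols = W.shape[1]
    # Hessian of the layer-wise objective, damped for numerical stability.
    H = X @ X.T + damp * np.eye(cols)
    Hinv = np.linalg.inv(H)
    Wq = np.zeros_like(W)
    for j in range(cols):
        Wq[:, j] = quantize(W[:, j], scale)
        # OBS-style update: propagate this column's quantization error
        # into the not-yet-quantized columns to compensate for it.
        err = (W[:, j] - Wq[:, j]) / Hinv[j, j]
        W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])
    return Wq
```

The paper's point of departure is the target of this alignment: later columns are compensated to match the output of the already-compensated weights, not the output of the original full-precision weights, which is the gap the proposed compensation-aware error closes.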

Top-level tags: llm model training machine learning
Detailed tags: quantization weight compensation model compression error analysis large language models

Rethinking Residual Errors in Compensation-based LLM Quantization


1️⃣ One-sentence summary

This paper identifies and corrects a key miscalibration in the objective used by existing LLM quantization methods: by aligning the quantized model's output more precisely with the original full-precision model and introducing the notion of a "compensation-aware error", it significantly improves quantization performance.

Source: arXiv:2604.07955