SliderQuant: Accurate Post-Training Quantization for LLMs
1️⃣ One-Sentence Summary
This paper proposes SliderQuant, a new quantization framework. Through empirical analysis, it finds that different layers of large language models have different sensitivities to quantization, and it designs an adaptive sliding-window quantization method that significantly reduces quantization error across a variety of tasks and models, outperforming existing approaches.
In this paper, we address post-training quantization (PTQ) for large language models (LLMs) from an overlooked perspective: given a pre-trained high-precision LLM, the predominant sequential quantization framework treats all layers equally, which may not be optimal in challenging bit-width settings. We empirically study the quantization impact of different layers on model accuracy and observe that: (1) shallow and deep layers are usually more sensitive to quantization than intermediate layers; (2) among the shallow and deep layers, the most sensitive are the first and last layers, respectively, which exhibit significantly larger quantization errors than the others. These observations imply that the quantization design for LLM layers should operate on multiple levels rather than a single level shared across all layers. Motivated by this, we propose a new PTQ framework termed Sliding-layer Quantization (SliderQuant), which relies on a simple adaptive sliding quantization concept facilitated by a few learnable parameters. The base component of SliderQuant, inter-layer sliding quantization, incorporates three novel sliding-window designs tailored to the varying quantization sensitivity of shallow, intermediate, and deep layers. The other component, intra-layer sliding quantization, leverages an incremental strategy to quantize each window. As a result, SliderQuant substantially reduces quantization errors across layers. Extensive experiments on basic language generation, zero-shot commonsense reasoning, and challenging math and code tasks with various LLMs, including the Llama/Llama2/Llama3/Qwen2.5 model families, DeepSeek-R1 distilled models, and large MoE models, show that our method outperforms existing PTQ methods (including the latest PTQ methods using rotation transformations) for both weight-only and weight-activation quantization.
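The abstract describes the two components only at a high level. The minimal Python sketch below illustrates the general idea of grouping layers into sensitivity-aware windows and quantizing them window by window. All names (`rtn_quantize`, `sliding_windows`, `slider_quant_sketch`), window sizes, and the round-to-nearest quantizer are illustrative assumptions, not the paper's actual design, which additionally uses learnable parameters and an incremental intra-window strategy not detailed in the abstract.

```python
import torch

def rtn_quantize(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Generic symmetric round-to-nearest quantizer (a stand-in; the
    abstract does not specify SliderQuant's per-window quantizer)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

def sliding_windows(num_layers: int, shallow: int = 2, mid: int = 4, deep: int = 2):
    """Partition layers into three window regimes (sizes are assumptions):
    fine-grained windows for the sensitive shallow and deep thirds, wider
    windows for the intermediate third. The first and last layers each get
    their own window, reflecting the observation that they are the most
    quantization-sensitive."""
    third = num_layers // 3
    windows, i = [[0]], 1
    while i < num_layers - 1:
        if i < third:
            size = shallow            # shallow regime
        elif i >= num_layers - third:
            size = deep               # deep regime
        else:
            size = mid                # intermediate regime
        end = min(i + size, num_layers - 1)
        windows.append(list(range(i, end)))
        i = end
    windows.append([num_layers - 1])
    return windows

@torch.no_grad()
def slider_quant_sketch(layer_weights, n_bits=4):
    """Quantize windows sequentially, layers within a window one at a time.
    In a real PTQ pipeline, calibration activations would be re-propagated
    through already-quantized layers so each step compensates earlier
    quantization error; that feedback loop is omitted here for brevity."""
    for window in sliding_windows(len(layer_weights)):
        for idx in window:
            layer_weights[idx] = rtn_quantize(layer_weights[idx], n_bits)
    return layer_weights

# Example: a toy 12-layer model, one weight matrix per layer.
weights = [torch.randn(64, 64) for _ in range(12)]
print(sliding_windows(12))  # [[0], [1, 2], [3, 4], [5, 6, 7, 8], [9, 10], [11]]
quantized = slider_quant_sketch(weights, n_bits=4)
```

The point of the schedule is that the per-layer quantization granularity tracks the sensitivity pattern reported in the paper: finer treatment where errors hurt most (the ends of the network), coarser treatment in the middle where layers are more robust.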
Source: arXiv 2603.25284