arXiv submission date: 2026-02-03
📄 Abstract - MatGPTQ: Accurate and Efficient Post-Training Matryoshka Quantization

Matryoshka Quantization (MatQuant) is a recent quantization approach showing that a single integer-quantized model can be served at multiple precisions by slicing off the most significant bits (MSBs) at inference time. This lets a single checkpoint cover a wide range of memory and latency budgets, but makes quantization considerably more challenging. In particular, the original MatQuant relies on expensive quantization-aware training (QAT) variants rather than fast one-shot post-training quantization (PTQ), and lacks open-source and kernel support. We address all of these limitations by introducing Post-Training Matryoshka Quantization (MatGPTQ), a new PTQ pipeline that produces a single parent model jointly optimized for multiple target precisions in one shot, using a small calibration set. MatGPTQ casts Matryoshka quantization as a multi-precision objective with bit-slicing and cross-bit error compensation, yielding an algorithm that produces a multi-bit-width, "sliceable" model in a single pass. We also incorporate a new budget-aware search for heterogeneous per-layer bit-widths and provide efficient kernels that implement slicing and mixed-precision execution. Across standard LLMs and benchmarks, MatGPTQ preserves high-bit accuracy while substantially improving performance in low-bit-width settings. Overall, we establish a new state of the art for Matryoshka-style post-training quantization and make single-checkpoint, multi-precision deployment open and practical. Code is available at this https URL.
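To make the MSB-slicing idea concrete, here is a minimal NumPy sketch of how a single 8-bit checkpoint can be served at lower precisions by keeping only the top bits of each code and rescaling. All function names and the rounding choice are illustrative assumptions, not the paper's code; MatGPTQ's actual kernels and compensation scheme are more involved.

```python
import numpy as np

def quantize_uint8(w: np.ndarray):
    """Asymmetric per-tensor uint8 quantization (hypothetical helper)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    zero = round(-lo / scale)
    q = np.clip(np.round(w / scale) + zero, 0, 255).astype(np.uint8)
    return q, scale, zero

def slice_msb(q8: np.ndarray, bits: int) -> np.ndarray:
    """Keep the `bits` most significant bits of each 8-bit code.

    Adding half of the dropped range before shifting rounds to the
    nearest low-bit code instead of truncating.
    """
    shift = 8 - bits
    if shift == 0:
        return q8
    rounded = np.minimum(q8.astype(np.int32) + (1 << (shift - 1)), 255)
    return (rounded >> shift).astype(np.uint8)

def dequant(q: np.ndarray, scale: float, zero: float, bits: int) -> np.ndarray:
    """Dequantize sliced codes: the effective scale grows by 2**(8 - bits)."""
    shift = 8 - bits
    return (q.astype(np.float32) - zero / (1 << shift)) * (scale * (1 << shift))

w = np.random.randn(4, 8).astype(np.float32)
q8, s, z = quantize_uint8(w)
for b in (8, 4, 2):  # one stored checkpoint, three servable precisions
    w_hat = dequant(slice_msb(q8, b), s, z, b)
    print(f"{b}-bit reconstruction MSE: {np.mean((w - w_hat) ** 2):.6f}")
```

The point of the sketch is that lower precisions are free at serving time: no re-quantization pass is needed, only a shift and a rescale of the codes already on disk.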

Top-level tags: model training, llm, systems
Detailed tags: model quantization, post-training quantization, efficient inference, multi-precision models, large language models

MatGPTQ: Accurate and Efficient Post-Training Matryoshka Quantization


1️⃣ One-Sentence Summary

This paper proposes a new method called MatGPTQ that, in a single pass, compresses a large language model into one quantized checkpoint that can be flexibly sliced, so the same model file can be served at different precisions at runtime to match each device's capabilities, preserving the accuracy of the high-precision model while substantially improving efficiency at low precisions.
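The abstract says the parent model is jointly optimized for multiple target precisions on a small calibration set. A self-contained, hedged sketch of what such a multi-precision objective can look like follows: a weighted sum of per-precision layer reconstruction errors, all computed from the same 8-bit parent codes. The weights `lam` and every name here are assumptions for illustration; the paper's actual cross-bit error compensation inside the PTQ solver is not reproduced.

```python
import numpy as np

def dequant_sliced(q8, scale, zero, bits):
    """Slice 8-bit codes to `bits` MSBs (with rounding), then dequantize."""
    shift = 8 - bits
    q = q8.astype(np.int32)
    if shift:
        q = np.minimum(q + (1 << (shift - 1)), 255) >> shift
    return (q.astype(np.float32) - zero / (1 << shift)) * (scale * (1 << shift))

def multi_precision_loss(W, q8, scale, zero, X, bits=(8, 4, 2), lam=(1.0, 1.0, 1.0)):
    """Weighted sum over target precisions b of mean ||X W - X W_hat(b)||^2.

    W : original float weights, shape (in, out)
    q8: 8-bit parent codes for W, same shape
    X : calibration activations, shape (n, in)
    """
    loss = 0.0
    for b, weight in zip(bits, lam):
        W_hat = dequant_sliced(q8, scale, zero, b)
        loss += weight * np.mean((X @ W - X @ W_hat) ** 2)
    return loss

# Toy usage on random data (hypothetical shapes).
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8)).astype(np.float32)
X = rng.standard_normal((32, 16)).astype(np.float32)
scale = float(W.max() - W.min()) / 255.0
zero = round(float(-W.min()) / scale)
q8 = np.clip(np.round(W / scale) + zero, 0, 255).astype(np.uint8)
print(multi_precision_loss(W, q8, scale, zero, X))
```

Optimizing the parent codes against all precision terms at once is what keeps every slice accurate, rather than only the full 8-bit model.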

Source: arXiv: 2602.03537