📄 Abstract - WUSH: Near-Optimal Adaptive Transforms for LLM Quantization

Quantization to low bitwidth is a standard approach for deploying large language models; however, a few extreme weights and activations stretch the dynamic range and reduce the effective resolution of the quantizer. A common mitigation is to apply a fixed orthogonal transform, such as a Hadamard matrix, before quantization, which typically reduces the dynamic range. Yet these transforms ignore the statistics of the data, and their optimality is currently not understood. In this work, we derive, for the first time, closed-form optimal linear blockwise transforms for joint weight-activation quantization using standard data-free quantizers for common numerical formats. Specifically, we provide derivations of the optimal adaptive (data-aware) transforms for round-to-nearest (RTN), AbsMax-scaled block quantizers for both integer and floating-point formats. The resulting construction, which we call WUSH, combines a Hadamard backbone with a data-dependent component based on second-order moments, yielding a non-orthogonal transform that is provably optimal under mild assumptions and remains structured for efficient implementation. Preliminary experimental results show that our approach consistently improves upon the Hadamard transform for common formats.
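To make the baseline concrete, here is a minimal sketch (not the WUSH construction itself) of the pipeline the abstract describes: blockwise AbsMax-scaled round-to-nearest (RTN) quantization, with an optional orthogonal Hadamard transform applied to each block before quantizing. The block size, bitwidth, and synthetic outlier data below are illustrative assumptions; the example only shows why spreading extreme values across a block reduces quantization error.

```python
# Minimal sketch (illustrative, not the paper's WUSH transform): blockwise
# AbsMax-scaled RTN quantization, optionally preceded by an orthogonal
# Hadamard transform per block. Block size, bitwidth, and the synthetic
# outlier model are assumptions made for this example.
import numpy as np
from scipy.linalg import hadamard

def absmax_rtn_quant(x, bits=4):
    """Symmetric AbsMax-scaled RTN quantizer, applied per block (per row)."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for INT4
    scale = np.abs(x).max(axis=-1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)         # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                                 # dequantized values

def quant_error(x, transform=None, bits=4):
    """MSE of quantize->dequantize, optionally in a transformed basis."""
    if transform is not None:
        y = x @ transform.T                          # transform each block
        y_hat = absmax_rtn_quant(y, bits)
        x_hat = y_hat @ np.linalg.inv(transform).T   # transform back
    else:
        x_hat = absmax_rtn_quant(x, bits)
    return np.mean((x - x_hat) ** 2)

block = 64
rng = np.random.default_rng(0)
w = rng.normal(size=(1024, block))
w[:, 0] *= 20.0                                      # a few extreme weights stretch the range

H = hadamard(block) / np.sqrt(block)                 # orthogonal Hadamard transform
print("plain RTN MSE   :", quant_error(w))
print("Hadamard RTN MSE:", quant_error(w, H))
```

On this synthetic data, the Hadamard-rotated blocks have a much smaller dynamic range and hence a lower RTN error; WUSH goes further by adding a data-dependent, non-orthogonal component built from second-order moments to this Hadamard backbone.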

Top-level tags: llm model training systems
Detailed tags: quantization adaptive transforms low bitwidth dynamic range hadamard

WUSH: Near-Optimal Adaptive Transforms for LLM Quantization


1️⃣ One-Sentence Summary

This paper proposes WUSH, an adaptive (data-aware) transform that combines a Hadamard backbone with second-order data statistics to reduce the dynamic range before quantizing large language models; the resulting transform is provably optimal under mild assumptions, remains structured for efficient implementation, and consistently improves on the plain Hadamard transform.


📄 Open the original PDF