arXiv submission date: 2026-04-02
📄 Abstract - AA-SVD: Anchored and Adaptive SVD for Large Language Model Compression

We introduce a fast low-rank factorization-based framework for compressing large language models that enables rapid compression of billion-parameter models without retraining. Existing factorization-based approaches either optimize only on the original inputs, ignoring distribution shifts introduced by upstream compression and thus propagating errors forward, or rely only on the shifted inputs and risk drifting away from the original outputs; our approach accounts for both. Beyond individual layer compression, we further refine each transformer block end-to-end, minimizing block-level output distortion and allowing compressed layers to jointly compensate for accumulated errors. By anchoring each compressed layer to the original outputs while explicitly modeling input distribution shifts, our method finds a low-rank approximation that maintains functional equivalence with the original model. Experiments on large language models show that our method consistently outperforms existing SVD-based baselines across compression ratios, with the advantage becoming increasingly pronounced at aggressive compression budgets, where competing methods degrade substantially or collapse entirely. This makes our approach a practical solution for efficient, large-scale model deployment.
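The abstract's core idea — fit each compressed layer so that it reproduces the *original* layer's outputs (the anchor) while consuming the *shifted* inputs produced by upstream compression — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the plain least-squares fit, and the unweighted truncated SVD are all assumptions for the sketch.

```python
import numpy as np

def anchored_adaptive_lowrank(W, X_orig, X_shift, rank):
    """Hypothetical sketch of an anchored + adaptive low-rank fit.

    W       : (d_out, d_in) original weight matrix
    X_orig  : (d_in, n) calibration inputs to the original model
    X_shift : (d_in, n) the same inputs after upstream compression
              (the shifted distribution the compressed layer will see)
    rank    : target rank r of the factorization
    """
    # Anchor: the outputs the original layer produced on original inputs.
    Y = W @ X_orig

    # Adaptive fit: solve min_W' ||W' X_shift - Y||_F, i.e. find a dense
    # weight that maps *shifted* inputs back to the *original* outputs.
    # (lstsq solves a @ x = b, so we transpose: X_shift.T @ W'.T = Y.T)
    W_fit_T, *_ = np.linalg.lstsq(X_shift.T, Y.T, rcond=None)
    W_fit = W_fit_T.T

    # Compress: truncated SVD of the fitted weight gives W' ≈ A @ B.
    # (The paper's method may weight this step by the input covariance;
    # the plain Frobenius truncation here is a simplification.)
    U, S, Vt = np.linalg.svd(W_fit, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (d_out, r)
    B = Vt[:rank, :]             # (r, d_in)
    return A, B
```

At inference the layer is replaced by the two thin factors `B` then `A`, cutting parameters from `d_out * d_in` to `r * (d_out + d_in)`. Note the contrast with plain activation-free SVD: here the factors are fitted against shifted inputs but anchored to original outputs, which is exactly the two-sided objective the abstract describes.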

Top-level tags: llm model training systems
Detailed tags: model compression low-rank factorization svd parameter efficiency transformer optimization

AA-SVD: Anchored and Adaptive SVD for Large Language Model Compression


1️⃣ One-Sentence Summary

This paper proposes a new method for fast compression of large language models: by jointly accounting for the original model's outputs and the data-distribution shifts that arise during compression, it compresses billion-parameter models efficiently without retraining, and at high compression ratios it significantly outperforms existing methods.

Source: arXiv:2604.02119