SLaB: Sparse-Lowrank-Binary Decomposition for Efficient Large Language Models
1️⃣ One-sentence summary
This paper proposes SLaB, a new method that decomposes the linear-layer weights of a large language model into three complementary parts (sparse, low-rank, and binary), significantly improving performance under heavy compression without any retraining.
The rapid growth of large language models (LLMs) presents significant deployment challenges due to their massive computational and memory demands. While model compression techniques such as network pruning offer potential solutions, most existing methods fail to maintain good performance at high compression ratios. To address this, we propose SLaB, a novel framework that decomposes each linear layer weight into three complementary components: a sparse matrix, a low-rank matrix, and a binary matrix. SLaB eliminates the need for retraining and leverages activation-aware pruning scores to guide the decomposition process. Experiments on Llama-family models demonstrate that SLaB achieves state-of-the-art performance, reducing perplexity by up to 36% compared to existing methods at 50% compression and improving accuracy by up to 8.98% over the baseline on zero-shot tasks.
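To make the decomposition concrete, the sketch below shows one plausible way to split a weight matrix W into sparse, low-rank, and binary parts via simple alternating residual fitting. This is only an illustrative sketch, not the paper's algorithm: the function name `decompose_slab`, the alternating scheme, the magnitude-based sparsity rule (the paper uses activation-aware pruning scores instead), and all parameter choices are assumptions for demonstration.

```python
import numpy as np

def decompose_slab(W, sparsity=0.5, rank=8, iters=10):
    """Illustrative sketch (not the paper's method): approximate
    W ~= S + L + B, where S is sparse, L is low-rank, and B is a
    scaled binary (sign) matrix, by alternating residual fits."""
    S = np.zeros_like(W)
    L = np.zeros_like(W)
    B = np.zeros_like(W)
    for _ in range(iters):
        # Binary part: best scaled sign matrix for the current residual
        R = W - S - L
        alpha = np.mean(np.abs(R))
        B = alpha * np.sign(R)

        # Low-rank part: truncated SVD of the residual
        R = W - S - B
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]

        # Sparse part: keep the top-magnitude residual entries
        # (stand-in for the paper's activation-aware pruning scores)
        R = W - L - B
        k = int(sparsity * R.size)
        if k > 0:
            thresh = np.partition(np.abs(R).ravel(), R.size - k)[R.size - k]
            S = np.where(np.abs(R) >= thresh, R, 0.0)
        else:
            S = np.zeros_like(W)
    return S, L, B
```

In an actual deployment, S would be stored in a sparse format, L as two thin factor matrices, and B as one bit per entry plus a scale, which is where the compression comes from.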
Source: arXiv: 2604.04493