arXiv submission date: 2026-03-30
📄 Abstract - Compressing Transformer Language Models via Matrix Product Operator Decomposition: A Case Study on PicoGPT

Transformer-based language models achieve strong performance across NLP tasks, but their quadratic parameter scaling with hidden dimension makes deployment on resource-constrained hardware expensive. We study Matrix Product Operator (MPO) decomposition as a principled compression method for transformers. MPO factorises weight matrices into chains of low-rank cores, with approximation quality controlled by the bond dimension chi. We replace every nn.Linear layer in PicoGPT, a GPT-2-style character-level language model with about 1M parameters, with an MPOLinear module parameterised as an MPO chain. Cores are initialised either by TT-SVD from pretrained dense weights or from random initialisation, and trained using standard PyTorch autograd without a custom backward pass. We derive balanced factorisation schemes for the five distinct weight shapes in PicoGPT and evaluate bond dimensions chi in {4, 8, 16, 32} on Tiny Shakespeare. MPO compression achieves up to 13x compression per transformer block at chi = 4. At chi = 16, the model uses 191,872 parameters instead of 1,020,224 while retaining 97.7% of baseline token accuracy (51.6% vs 52.8%). Reconstruction error follows the expected trend and is lower for three-site than two-site factorisations at the same bond dimension. The chi = 8 model gives the best accuracy per parameter, exceeding the dense baseline by 2.7x on this metric. These results show that MPO parameterisation is a practical and theoretically grounded alternative to low-rank methods and unstructured pruning for transformer compression.
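The two-site factorisation and TT-SVD initialisation described in the abstract can be sketched with a truncated SVD. The index grouping, function names, and shapes below are illustrative assumptions for a two-core chain, not the paper's actual code:

```python
import numpy as np

def mpo_two_site(W, row_factors, col_factors, chi):
    """Split a (m1*m2) x (n1*n2) weight matrix into two MPO cores,
    keeping at most `chi` singular values (the TT-SVD truncation)."""
    m1, m2 = row_factors
    n1, n2 = col_factors
    # Regroup W[(i1 i2), (j1 j2)] into a matrix over (i1 j1) x (i2 j2),
    # so an SVD cut between the two index groups creates the bond.
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    k = min(chi, S.size)
    core1 = (U[:, :k] * S[:k]).reshape(m1, n1, k)   # (m1, n1, bond)
    core2 = Vt[:k].reshape(k, m2, n2)               # (bond, m2, n2)
    return core1, core2

def mpo_reconstruct(core1, core2):
    """Contract the two cores back into a dense (m1*m2) x (n1*n2) matrix."""
    m1, n1, k = core1.shape
    _, m2, n2 = core2.shape
    T = np.tensordot(core1, core2, axes=(2, 0))     # (m1, n1, m2, n2)
    return T.transpose(0, 2, 1, 3).reshape(m1 * m2, n1 * n2)
```

The parameter saving is visible directly: a 64x64 weight (4,096 dense parameters) factorised as (8,8)x(8,8) stores only 8*8*4 + 4*8*8 = 512 core parameters at chi = 4, an 8x reduction for that matrix.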

Top-level tags: llm, model training, machine learning
Detailed tags: model compression, tensor decomposition, transformer, parameter efficiency, low-rank approximation

Compressing Transformer Language Models via Matrix Product Operator Decomposition: A Case Study on PicoGPT


1️⃣ One-sentence summary

This paper studies Matrix Product Operator (MPO) decomposition as a method for compressing the parameters of Transformer language models. On the PicoGPT model it achieves up to 13x compression per transformer block while retaining accuracy close to the dense baseline, offering a practical route to deploying language models on resource-constrained hardware.
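To see how a compressed layer is used at inference time, a forward pass through a two-core MPO-parameterised linear layer can be written as a single tensor contraction, without ever materialising the dense weight. The shapes and einsum grouping below are illustrative assumptions, not the paper's MPOLinear implementation:

```python
import numpy as np

def mpo_linear_forward(x, core1, core2):
    """Compute y = W @ x with W held as two MPO cores.

    x     : (..., n1*n2) input activations
    core1 : (m1, n1, chi) left core, indexed [i1, j1, k]
    core2 : (chi, m2, n2) right core, indexed [k, i2, j2]
    returns (..., m1*m2) output activations
    """
    m1, n1, chi = core1.shape
    _, m2, n2 = core2.shape
    # Reshape the input so each grouped index can meet its core.
    xt = x.reshape(*x.shape[:-1], n1, n2)
    # Contract input with both cores; k is the shared bond index.
    y = np.einsum('ajk,kbl,...jl->...ab', core1, core2, xt)
    return y.reshape(*y.shape[:-2], m1 * m2)
```

Because the contraction touches only the small cores, both the parameter count and the matmul cost scale with the bond dimension chi rather than with the full hidden dimension.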

Source: arXiv:2603.28534