arXiv submission date: 2026-04-22
📄 Abstract - Variance Is Not Importance: Structural Analysis of Transformer Compressibility Across Model Scales

We present a systematic empirical study of transformer compression through over 40 experiments on GPT-2 (124M parameters) and Mistral 7B (7.24B parameters). Our analysis covers spectral compression, block-level function replacement, rotation-based quantization, activation geometry, and adaptive early exit. We identify five structural properties relevant to compression. (1) Variance is not importance: high-variance activation directions are approximately 96 percent uncorrelated with predictive directions (measured via CCA), and projecting onto these subspaces preserves over 90 percent of variance while degrading perplexity. (2) Block linearity is conditional: transformer blocks are approximately linear (R^2 ~ 0.95 on GPT-2, 0.93 on Mistral block 31) only under the correct upstream distribution; modifying earlier blocks induces distribution shift that degrades downstream approximations. (3) The reconstruction wall: approaches that factor weights into quantized components amplify errors through cross-terms, making direct quantization strictly superior. (4) Linearity increases with depth: Mistral 7B exhibits a progression from R^2 = 0.17 (block 0) to R^2 = 0.93 (block 31), indicating a division between nonlinear feature construction and linear refinement. (5) Approximately 30 percent of tokens are computationally easy, confirmed via exit heads and KL divergence sensitivity. We demonstrate that single-block linear replacement achieves 34x compression with a 1.71 perplexity increase on the final block of Mistral 7B, while multi-block replacement fails due to residual error accumulation and distribution shift. These findings suggest fundamental limits to static post-training compression and motivate adaptive, per-token computation as a more effective direction.
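The "variance is not importance" measurement (property 1) can be illustrated with a small synthetic experiment, assuming a toy setup rather than the paper's actual probes: construct activations whose high-variance subspace is deliberately decoupled from the predictive subspace, then compare the variance captured by the top principal directions against their canonical correlation with the predictive signal (CCA implemented here via the standard QR identity).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20

# Synthetic "activations": the high-variance subspace (first 5 dims, scaled 10x)
# is deliberately decoupled from the predictive subspace (last 5 dims).
acts = rng.standard_normal((n, d))
acts[:, :5] *= 10.0
targets = acts[:, -5:] @ rng.standard_normal((5, 3))  # predictive signal

# Variance captured by the top-5 principal subspace.
Xc = acts - acts.mean(0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_kept = (s[:5] ** 2).sum() / (s ** 2).sum()

def canonical_corrs(X, Y):
    """Canonical correlations via QR orthonormalization of both blocks."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

top_var_coords = Xc @ Vt[:5].T  # coordinates in the top-variance subspace
cc_variance = canonical_corrs(top_var_coords, targets)
cc_predictive = canonical_corrs(acts[:, -5:], targets)

print(f"variance kept by top-5 subspace:   {var_kept:.2f}")
print(f"mean CCA, top-variance vs targets: {cc_variance.mean():.2f}")
print(f"min CCA, predictive dims vs targets: {cc_predictive.min():.2f}")
```

In this construction the top-variance subspace keeps well over 90 percent of the variance yet is nearly uncorrelated (under CCA) with the predictive signal, mirroring the dissociation the abstract reports.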

Top-level tags: llm, model training, model evaluation
Detailed tags: transformer compression, spectral analysis, activation geometry, per-token adaptivity, perplexity analysis

Variance Is Not Importance: Structural Analysis of Transformer Compressibility Across Model Scales


1️⃣ One-Sentence Summary

Through extensive experiments on GPT-2 and Mistral 7B, this paper identifies five key structural properties of transformer models, shows that high-variance directions are not the same as predictively relevant directions, and argues that static compression faces fundamental limits, making per-token adaptive computation the more effective direction.
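The conditional block-linearity claim (properties 2 and 4) can be sketched with a toy experiment; the residual-plus-ReLU "block", the shapes, and the particular distribution shift below are all illustrative assumptions, not the paper's setup. The idea: fit an affine replacement to a block's input-output pairs, measure R^2 in-distribution, then re-evaluate the same fit under a shifted upstream distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 64, 4000

# Toy stand-in for a transformer block: residual stream + ReLU MLP.
W1 = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
W2 = rng.standard_normal((4 * d, d)) / np.sqrt(4 * d)

def block(x):
    return x + np.maximum(x @ W1, 0.0) @ W2

def r_squared(y, pred):
    return 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean(0)) ** 2).sum()

# Fit an affine replacement y ~ [x, 1] @ coef on the "correct" upstream distribution.
X = rng.standard_normal((n, d))
Y = block(X)
Xa = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
r2_in_dist = r_squared(Y, Xa @ coef)

# Re-evaluate the same affine map under a shifted upstream distribution,
# mimicking the effect of modifying earlier blocks.
X_shift = 3.0 * rng.standard_normal((n, d)) + 1.0
Y_shift = block(X_shift)
r2_shifted = r_squared(Y_shift, np.hstack([X_shift, np.ones((n, 1))]) @ coef)

print(f"R^2 in-distribution:      {r2_in_dist:.2f}")
print(f"R^2 after upstream shift: {r2_shifted:.2f}")
```

Because the residual path dominates the block's output, the affine fit scores high R^2 on the distribution it was fit under, but the same fit degrades once the upstream distribution changes, which is the failure mode the paper reports for multi-block replacement.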

Source: arXiv: 2604.20682