
arXiv submission date: 2026-03-10
📄 Abstract - Surgical Repair of Collapsed Attention Heads in ALiBi Transformers

We identify a systematic attention collapse pathology in the BLOOM family of transformer language models, where ALiBi positional encoding causes 31-44% of attention heads to attend almost entirely to the beginning-of-sequence token. The collapse follows a predictable pattern across four model scales (560M to 7.1B parameters), concentrating in head indices where ALiBi's slope schedule imposes the steepest distance penalties. We introduce surgical reinitialization: targeted Q/K/V reinitialization with zeroed output projections and gradient-masked freezing of all non-surgical parameters. Applied to BLOOM-1b7 on a single consumer GPU, the technique recovers 98.7% operational head capacity (242 to 379 of 384 heads) in two passes. A controlled comparison with C4 training data confirms that reinitialization -- not corpus content -- drives recovery, and reveals two distinct post-surgical phenomena: early global functional redistribution that improves the model, and late local degradation that accumulates under noisy training signal. An extended experiment reinitializing mostly-healthy heads alongside collapsed ones produces a model that transiently outperforms stock BLOOM-1b7 by 25% on training perplexity (12.70 vs. 16.99), suggesting that pretrained attention configurations are suboptimal local minima. Code, checkpoints, and diagnostic tools are released as open-source software.
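The collapse diagnostic the abstract describes — heads whose attention mass lands almost entirely on the beginning-of-sequence token — can be sketched as a simple per-head measurement. This is an illustrative reconstruction, not the paper's released tooling; the 0.9 cutoff, the array layout, and the function names are assumptions.

```python
import numpy as np

def bos_attention_fraction(attn: np.ndarray) -> np.ndarray:
    """Per-head mean attention mass placed on the BOS token.

    attn: (num_heads, seq_len, seq_len) softmax attention probabilities,
    where attn[h, q, k] is how much query position q attends to key k.
    """
    # Column 0 holds each query's attention on the BOS token;
    # average it over all query positions.
    return attn[:, :, 0].mean(axis=1)

def find_collapsed_heads(attn: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Indices of heads attending almost entirely to BOS (assumed cutoff)."""
    return np.where(bos_attention_fraction(attn) >= threshold)[0]
```

Per the abstract, the pathology concentrates in head indices where ALiBi's slope schedule imposes the steepest distance penalties, so a diagnostic like this would be run per layer over a batch of inputs and aggregated across heads.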

Top-level tags: llm model training theory
Detailed tags: attention collapse alibi parameter reinitialization transformer pathology model repair

Surgical Repair of Collapsed Attention Heads in ALiBi Transformers


1️⃣ One-sentence summary

This paper finds that a large fraction of attention heads in the BLOOM family of large language models have failed, and proposes a precise "surgical" repair method that restores model performance with minimal compute — even transiently surpassing the original model — suggesting that pretrained models may not sit at optimal configurations.
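Surgical reinitialization, as described in the abstract — fresh Q/K/V slices for collapsed heads, zeroed output-projection columns so the repaired heads start as a no-op, and gradient masks freezing every non-surgical parameter — might look like the following minimal numpy sketch. The weight layout (heads contiguous along the hidden dimension), the 0.02 init scale, and the function names are assumptions, not the released code.

```python
import numpy as np

def surgical_reinit(w_q, w_k, w_v, w_o, collapsed, head_dim, rng):
    """Reinitialize Q/K/V rows for the collapsed heads and zero the
    matching output-projection columns, so each repaired head
    contributes nothing until training writes it back in."""
    for h in collapsed:
        rows = slice(h * head_dim, (h + 1) * head_dim)
        for w in (w_q, w_k, w_v):
            # Fresh small-scale Gaussian init on the surgical slice only.
            w[rows, :] = rng.normal(0.0, 0.02, size=w[rows, :].shape)
        # Zeroed output projection: the repaired head is silenced at first.
        w_o[:, rows] = 0.0
    return w_q, w_k, w_v, w_o

def gradient_mask(hidden_size, collapsed, head_dim):
    """Boolean per-row mask: True only on surgical slices. Multiplying
    gradients by this mask freezes all non-surgical parameters."""
    mask = np.zeros(hidden_size, dtype=bool)
    for h in collapsed:
        mask[h * head_dim:(h + 1) * head_dim] = True
    return mask
```

In a real training loop the mask would be applied to each weight's gradient before the optimizer step, so only the reinitialized slices ever move.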

Source: arXiv 2603.09616