arXiv submission date: 2026-03-18
📄 Abstract - Learning When to Attend: Conditional Memory Access for Long-Context LLMs

Language models struggle to generalize beyond pretraining context lengths, limiting long-horizon reasoning and retrieval. Continued pretraining on long-context data can help but is expensive due to the quadratic scaling of Attention. We observe that most tokens do not require (Global) Attention over the entire sequence and can rely on local context. Based on this, we propose L2A (Learning To Attend), a layer that enables conditional (token-wise) long-range memory access by deciding when to invoke global attention. We evaluate L2A on Qwen 2.5 and Qwen 3 models, extending their effective context length from 32K to 128K tokens. L2A matches the performance of standard long-context training to within 3% while skipping Global Attention for $\sim$80% of tokens, outperforming prior baselines. We also design custom Triton kernels to efficiently implement this token-wise conditional Attention on GPUs, achieving up to $\sim$2x improvements in training throughput and time-to-first-token over FlashAttention. Moreover, L2A enables post-training pruning of highly sparse Global Attention layers, reducing KV cache memory by up to 50% with negligible performance loss.

Top-level tags: llm · model training · systems
Detailed tags: attention mechanism · long-context · efficient training · kv cache · conditional computation

Learning When to Attend: Conditional Memory Access for Long-Context LLMs


1️⃣ One-sentence summary

This paper proposes L2A, a method that lets large language models decide, token by token, when global attention over the full sequence is actually needed. This extends the models' effective context length from 32K to 128K tokens while substantially reducing compute cost and improving inference efficiency.
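The core idea — a per-token gate that routes most tokens to cheap local attention and only a minority to full global attention — can be illustrated with a minimal NumPy sketch. This is an assumed simplification for intuition only: the function name, the threshold-based gate, and the fixed local window are illustrative choices, not the paper's actual architecture or its Triton kernels.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def conditional_attention(q, k, v, gate_scores, threshold=0.5, window=4):
    """Token-wise conditional causal attention (illustrative sketch).

    Tokens whose gate score reaches `threshold` attend over the whole
    prefix (global attention); all other tokens attend only within a
    local causal window. In L2A the gate is learned; here it is given.
    """
    T, d = q.shape
    out = np.zeros_like(v)
    scale = 1.0 / np.sqrt(d)
    for t in range(T):
        if gate_scores[t] >= threshold:
            lo = 0                       # global: attend to the full prefix
        else:
            lo = max(0, t - window + 1)  # local: attend within the window
        scores = (q[t] @ k[lo:t + 1].T) * scale
        out[t] = softmax(scores) @ v[lo:t + 1]
    return out
```

For early tokens the prefix fits inside the window, so global and local routing coincide; the savings come from late tokens in long sequences, where the paper reports roughly 80% of tokens can skip the global path.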

Source: arXiv:2603.17484