Sliding Window Attention Adaptation
1️⃣ One-sentence summary
This paper proposes SWAA, a set of practical recipes that combine five strategies to efficiently adapt LLMs pretrained with full attention to the cheaper sliding window attention mechanism, preserving long-context performance while significantly reducing inference cost.
The self-attention mechanism in Transformer-based Large Language Models (LLMs) scales quadratically with input length, making long-context inference expensive. Sliding window attention (SWA) reduces this cost to linear complexity, but naively enabling complete SWA at inference time for models pretrained with full attention (FA) causes severe long-context performance degradation due to the training-inference mismatch. This raises the question: can FA-pretrained LLMs be well adapted to SWA without pretraining? We investigate this by proposing Sliding Window Attention Adaptation (SWAA), a set of practical recipes that combine five methods for better adaptation: (1) applying SWA only during prefilling; (2) preserving "sink" tokens; (3) interleaving FA/SWA layers; (4) chain-of-thought (CoT); and (5) fine-tuning. Our experiments show that SWA adaptation is feasible but non-trivial: no single method suffices, yet specific synergistic combinations effectively recover the original long-context performance. We further analyze the performance-efficiency trade-offs of different SWAA configurations and provide recommended recipes for diverse scenarios. Our code is available at this https URL.
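To make two of the listed ingredients concrete, here is a minimal sketch (not the authors' code) of an attention mask that combines a causal sliding window with preserved "sink" tokens. The window size and sink count are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch: causal sliding-window attention mask with "sink" tokens.
# Hypothetical window/sink values for illustration only.
import torch

def swa_mask_with_sinks(seq_len: int, window: int = 1024, num_sinks: int = 4) -> torch.Tensor:
    """Boolean mask: True where query position i may attend to key position j."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (seq_len, 1)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions, shape (1, seq_len)
    causal = j <= i                          # never attend to future tokens
    in_window = (i - j) < window             # keep only the most recent `window` tokens
    is_sink = j < num_sinks                  # always keep the first few "sink" tokens
    return causal & (in_window | is_sink)

# Example: an 8-token sequence with a 3-token window and 1 sink token.
print(swa_mask_with_sinks(8, window=3, num_sinks=1).int())
```

Under the paper's recipes, such a mask would be applied selectively, e.g. only during prefilling, or only in a subset of layers when interleaving FA and SWA layers; how those choices are wired into a specific model is left to the released code.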
Source: arXiv:2512.10411