📄 Abstract - SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space

The quadratic complexity of full attention limits efficient long-context processing in large language models (LLMs). Sparse attention mitigates this cost by restricting each query to attend to a subset of previous tokens; however, training-free approaches often lead to severe performance degradation. Native sparse-attention methods (e.g., NSA, MoBA) alleviate this issue, yet exhibit a critical paradox: they produce lower attention sparsity than full-attention models, despite aiming to approximate full attention, which may constrain their effectiveness. We attribute this paradox to gradient update deficiency: low-ranked key-value pairs excluded during sparse training receive neither forward contribution nor backward gradients, and thus never learn proper suppression. To overcome this limitation, we propose SSA (Sparse Sparse Attention), a unified training framework that considers both sparse and full attention and enforces bidirectional alignment at every layer. This design preserves gradient flow to all tokens while explicitly encouraging sparse-attention outputs to align with their full-attention counterparts, thereby promoting stronger sparsity. As a result, SSA achieves state-of-the-art performance under both sparse and full attention inference across multiple commonsense benchmarks. Furthermore, SSA enables models to adapt smoothly to varying sparsity budgets; performance improves consistently as more tokens are allowed to attend, supporting flexible compute-performance trade-offs at inference time. Finally, we show that native sparse-attention training surprisingly improves long-context extrapolation by mitigating the over-allocation of attention values in sink areas, with SSA demonstrating the strongest extrapolation capability.
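To make the per-layer alignment idea concrete, the following is a minimal PyTorch sketch of one attention layer evaluated with both a full and a sparse branch, plus an alignment term between their outputs. The top-k sparsity pattern, the MSE alignment objective, and the function names (`full_attention`, `sparse_attention`, `layer_alignment_loss`) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of per-layer alignment between sparse and full attention
# outputs. Sparsity pattern (top-k per query) and MSE objective are assumptions.
import torch
import torch.nn.functional as F


def full_attention(q, k, v):
    # Standard causal scaled dot-product attention over all previous tokens.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    causal = torch.triu(torch.ones(scores.shape[-2:], dtype=torch.bool, device=q.device), 1)
    scores = scores.masked_fill(causal, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v


def sparse_attention(q, k, v, budget):
    # Each query attends only to its top-`budget` keys (one simple sparsity choice).
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    causal = torch.triu(torch.ones(scores.shape[-2:], dtype=torch.bool, device=q.device), 1)
    scores = scores.masked_fill(causal, float("-inf"))
    topk = scores.topk(min(budget, scores.shape[-1]), dim=-1).indices
    keep = torch.zeros_like(scores, dtype=torch.bool).scatter_(-1, topk, True)
    scores = scores.masked_fill(~keep, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v


def layer_alignment_loss(q, k, v, budget=64):
    # Run both branches: gradients reach every key/value pair through the full
    # branch, while the alignment term pulls the sparse output toward the full one.
    out_full = full_attention(q, k, v)
    out_sparse = sparse_attention(q, k, v, budget)
    return F.mse_loss(out_sparse, out_full)
```

In a full training loop, the language-modeling loss would presumably be combined with the summed per-layer alignment losses (e.g., `lm_loss + lambda_align * align_loss`, with `lambda_align` a hypothetical weighting), so that low-ranked key-value pairs excluded by the sparse branch still receive gradient updates.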

Top-level tags: llm, model training, machine learning
Detailed tags: sparse attention, long-context processing, training framework, gradient alignment, attention optimization

📄 Paper Summary

SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space


1️⃣ One-Sentence Summary

This paper proposes SSA, a new training framework that aligns sparse-attention outputs with their full-attention counterparts at every layer, preserving gradient updates to all tokens while significantly improving performance under sparse computation and supporting flexible compute-performance trade-offs.

