Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection
1️⃣ One-Sentence Summary
This paper proposes a new method called "Token Sparse Attention" that compresses computation by dynamically and cheaply selecting the important tokens within each attention head, significantly speeding up inference over very long contexts while preserving model accuracy.
2️⃣ Abstract
The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers; as a result, they can retain irrelevant tokens or rely on irreversible early decisions despite the layer- and head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight, dynamic token-level sparsification mechanism that compresses per-head $Q$, $K$, $V$ to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Token Sparse Attention thereby exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to a 3.23$\times$ attention speedup at 128K context length with less than 1% accuracy degradation. These results demonstrate that dynamic, interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
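To make the compress-attend-decompress flow concrete, below is a minimal PyTorch sketch. The key-norm importance score, the non-causal attention call, and the value pass-through for unselected tokens are illustrative assumptions, not the paper's actual selector or decompression policy.

```python
# Minimal sketch of the compress -> attend -> decompress flow (illustrative only).
# Assumptions not taken from the paper: key-norm importance scoring, non-causal
# attention, and value pass-through for unselected tokens.
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep_ratio=0.25):
    """q, k, v: [batch, heads, seq, head_dim] -> output: [batch, heads, seq, head_dim]."""
    B, H, S, D = q.shape
    n_keep = max(1, int(S * keep_ratio))

    # 1) Lightweight per-head importance score (key L2 norm here, as an assumption).
    scores = k.norm(dim=-1)                                     # [B, H, S]
    idx = scores.topk(n_keep, dim=-1).indices.sort(-1).values   # keep original token order
    idx_d = idx.unsqueeze(-1).expand(-1, -1, -1, D)             # [B, H, n_keep, D]

    # 2) Compress Q, K, V to the selected token set, per head.
    q_s = torch.gather(q, 2, idx_d)
    k_s = torch.gather(k, 2, idx_d)
    v_s = torch.gather(v, 2, idx_d)

    # 3) Dense attention on the reduced set; any dense kernel (e.g. Flash Attention) fits here.
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s)

    # 4) Decompress: scatter outputs back to their original positions. Unselected tokens
    #    pass their value vectors through (a placeholder policy), so later layers can
    #    still reconsider them.
    out = v.clone()
    out.scatter_(2, idx_d, out_s)
    return out
```

In this sketch, keeping a quarter of the tokens shrinks the attention score matrix by roughly 16x per head, which illustrates where attention-level speedups come from; the paper's reported gains rest on its own selection mechanism and kernels rather than this toy heuristic.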
Source: arXiv:2602.03216