Trainable Log-linear Sparse Attention for Efficient Diffusion Transformers
1️⃣ One-Sentence Summary
This paper proposes LLSA, a new trainable sparse attention mechanism that uses a hierarchical structure to reduce computational complexity from quadratic to log-linear, greatly improving the training and inference efficiency of Diffusion Transformers on long sequences while preserving image generation quality.
Diffusion Transformers (DiTs) set the state of the art in visual generation, yet their quadratic self-attention cost fundamentally limits scaling to long token sequences. Recent Top-K sparse attention approaches reduce the computation of DiTs by compressing tokens into block-wise representations and selecting a small set of relevant key blocks, but they still suffer from (i) quadratic selection cost on the compressed tokens and (ii) a K that must grow with sequence length to maintain model quality. We identify that this inefficiency stems from the single-level design: a single coarse level is insufficient to represent the global structure. In this paper, we introduce Log-linear Sparse Attention (LLSA), a trainable sparse attention mechanism for extremely long token sequences that reduces both selection and attention costs from quadratic to log-linear complexity by exploiting a hierarchical structure. LLSA performs hierarchical Top-K selection, progressively applying sparse Top-K selection using the indices found at the previous level, and introduces a Hierarchical KV Enrichment mechanism that preserves global context while using fewer tokens of different granularities during attention computation. To support efficient training, we develop a high-performance GPU implementation that uses only sparse indices for both the forward and backward passes, eliminating the need for dense attention masks. We evaluate LLSA on high-resolution pixel-space image generation without patchification or VAE encoding. LLSA accelerates attention inference by 28.27x and DiT training by 6.09x on 256x256 pixel token sequences while maintaining generation quality. These results demonstrate that LLSA offers a promising direction for training long-sequence DiTs efficiently. Code is available at: this https URL
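To make the hierarchical Top-K idea concrete, here is a minimal, illustrative sketch (not the authors' released implementation) of level-by-level selection: keys are mean-pooled into a pyramid of coarse blocks, Top-K is applied at the coarsest level, and each finer level only scores the children of the blocks selected above it. The branching factor, per-level K, mean-pooling compression, single-query formulation, and the function name are assumptions made for clarity.

```python
import torch

def hierarchical_topk_indices(q, k, branch=4, topk_per_level=8):
    """Select key indices for one query via hierarchical Top-K (sketch).

    q: (d,) query vector
    k: (N, d) keys; N is assumed to be a power of `branch`
    """
    N, d = k.shape

    # Build a pyramid of coarse key representations, coarsest level first,
    # by mean-pooling groups of `branch` children.
    levels = [k]
    while levels[0].shape[0] > branch:
        parent = levels[0].view(-1, branch, d).mean(dim=1)
        levels.insert(0, parent)

    # Start with every block at the coarsest level as a candidate.
    cand = torch.arange(levels[0].shape[0])
    for lvl, kv in enumerate(levels):
        scores = kv[cand] @ q                      # score only the candidates
        keep = min(topk_per_level, cand.numel())
        top = torch.topk(scores, keep).indices
        cand = cand[top]                           # blocks kept at this level
        if lvl + 1 < len(levels):
            # Expand each kept block to its `branch` children at the finer level.
            cand = (cand.unsqueeze(1) * branch
                    + torch.arange(branch)).reshape(-1)
    return cand  # indices into the original N keys

# Example: q = torch.randn(64); k = torch.randn(1024, 64)
# hierarchical_topk_indices(q, k) returns ~topk_per_level key indices.
```

Because each level scores at most K times the branching factor candidates and there are logarithmically many levels, per-query selection cost is O(K·b·log N) rather than quadratic; in the full mechanism the selected indices would then drive sparse attention with Hierarchical KV Enrichment, which this sketch does not cover.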
Source: arXiv:2512.16615