📄 Abstract - LiteAttention: A Temporal Sparse Attention for Diffusion Transformers

Diffusion Transformers, particularly for video generation, achieve remarkable quality but suffer from quadratic attention complexity, leading to prohibitive latency. Existing acceleration methods face a fundamental trade-off: dynamically estimating sparse attention patterns at each denoising step incurs high computational overhead and estimation errors, while static sparsity patterns remain fixed and often suboptimal throughout denoising. We identify a key structural property of diffusion attention, namely, that its sparsity patterns exhibit strong temporal coherence across denoising steps. Tiles deemed non-essential at step $t$ typically remain so at step $t+\delta$. Leveraging this observation, we introduce LiteAttention, a method that exploits temporal coherence to enable evolutionary computation skips across the denoising sequence. By marking non-essential tiles early and propagating skip decisions forward, LiteAttention eliminates redundant attention computations without repeated profiling overheads, combining the adaptivity of dynamic methods with the efficiency of static ones. We implement a highly optimized LiteAttention kernel on top of FlashAttention and demonstrate substantial speedups on production video diffusion models, with no degradation in quality. The code and implementation details will be publicly released.
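
The abstract describes the mechanism only at a high level; the sketch below illustrates the temporal skip-propagation idea in plain PyTorch, not the paper's FlashAttention-based kernel. The tile-importance heuristic (`tile_importance`), the fixed `threshold`, and the monotone propagation rule in `propagate_skip_mask` are illustrative assumptions, since the paper's exact criteria are not reproduced here.

```python
# Minimal sketch (assumed, not the paper's kernel): per-tile importance is taken
# as the max post-softmax attention mass in a tile block, and tiles flagged
# non-essential at one denoising step stay skipped at later steps.
import torch

def tile_importance(q, k, tile):
    """Approximate the importance of each (query-tile, key-tile) pair as the
    maximum post-softmax attention probability inside that tile block."""
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k.T * scale, dim=-1)            # [seq, seq]
    n = q.shape[0] // tile
    return attn.reshape(n, tile, n, tile).amax(dim=(1, 3))   # [n_tiles, n_tiles]

def propagate_skip_mask(scores, prev_mask, threshold):
    """Temporal coherence: a tile pair flagged non-essential at one step stays
    skipped at later steps; new skips are added as its score falls below threshold."""
    return prev_mask | (scores < threshold)

def sparse_tiled_attention(q, k, v, skip_mask, tile):
    """Tile-level attention that omits key/value tiles flagged in skip_mask."""
    n = q.shape[0] // tile
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    for i in range(n):
        keep = (~skip_mask[i]).nonzero(as_tuple=True)[0].tolist()
        if not keep:
            continue  # every key tile is skipped for this query tile
        qi = q[i * tile:(i + 1) * tile]
        kj = torch.cat([k[j * tile:(j + 1) * tile] for j in keep])
        vj = torch.cat([v[j * tile:(j + 1) * tile] for j in keep])
        out[i * tile:(i + 1) * tile] = torch.softmax(qi @ kj.T * scale, dim=-1) @ vj
    return out

# Toy denoising loop: the skip mask is carried forward across steps instead of
# being re-profiled from scratch at each one.
seq, dim, tile, num_steps, threshold = 256, 64, 32, 10, 1e-3
skip_mask = torch.zeros(seq // tile, seq // tile, dtype=torch.bool)
for step in range(num_steps):
    q, k, v = (torch.randn(seq, dim) for _ in range(3))      # stand-in for DiT activations
    skip_mask = propagate_skip_mask(tile_importance(q, k, tile), skip_mask, threshold)
    out = sparse_tiled_attention(q, k, v, skip_mask, tile)
```

Because the mask only accumulates skips, the per-step overhead stays low and the sparsity pattern adapts as denoising progresses, which is the trade-off the abstract contrasts with purely dynamic or purely static methods.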

Top tags: model training, video generation, aigc
Detailed tags: diffusion transformers, attention mechanism, computational efficiency, temporal sparsity, video generation

📄 Paper Summary

LiteAttention: A Temporal Sparse Attention for Diffusion Transformers


1️⃣ One-Sentence Summary

This paper proposes LiteAttention, an efficient attention mechanism that exploits the temporal coherence of attention sparsity patterns across the denoising process to skip redundant computation, substantially reducing latency while preserving video generation quality.

