Sink-Aware Pruning for Diffusion Language Models
1️⃣ One-Sentence Summary
This paper finds that attention sinks in diffusion language models are not stable, and proposes a new method that automatically identifies and prunes these unstable sinks, achieving a markedly better balance between inference efficiency and model performance without any retraining.
Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention-sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose **Sink-Aware Pruning**, which automatically identifies and prunes unstable sinks in DLMs, in contrast to prior work that keeps sinks for AR LLMs. Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at this https URL.
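The abstract's key measurement is how much the dominant sink position shifts across denoising timesteps. The paper does not give a formula, but a minimal sketch of one plausible way to quantify this instability, assuming access to per-timestep attention maps (averaged over heads and layers) and using the column with the largest incoming attention mass as the "sink", could look like:

```python
import numpy as np

def sink_instability(attn_maps):
    """Fraction of adjacent denoising steps at which the dominant
    attention-sink position changes (0.0 = perfectly stable, as
    claimed for AR models; higher = more transient sinks).

    attn_maps: array of shape (T, L, L) -- one attention map per
    timestep; rows are queries, columns are keys. This metric is an
    illustrative assumption, not the paper's exact definition.
    """
    attn_maps = np.asarray(attn_maps, dtype=float)
    # Incoming attention mass received by each key token: column sums.
    mass = attn_maps.sum(axis=1)               # shape (T, L)
    sinks = mass.argmax(axis=1)                # dominant sink per timestep
    # How often the sink position jumps between consecutive timesteps.
    return float(np.mean(sinks[1:] != sinks[:-1]))

# Toy trajectory: the sink sits at token 0 for two steps, then jumps
# to token 3 -- one change over three transitions.
step_a = np.tile([[0.7, 0.1, 0.1, 0.1]], (4, 1))
step_b = np.tile([[0.1, 0.1, 0.1, 0.7]], (4, 1))
traj = np.stack([step_a, step_a, step_b, step_b])
print(sink_instability(traj))  # 1/3 ~= 0.333
```

Under this toy metric, an AR-style model would score near 0 (a fixed sink, often the BOS token), while the paper's observation predicts noticeably higher scores for DLMs.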
Source: arXiv: 2602.17664