Sparse-LaViDa: Sparse Multimodal Discrete Diffusion Language Models
1️⃣ One-Sentence Summary
This paper proposes a new method called Sparse-LaViDa, which dynamically removes unnecessary computation during diffusion-model inference, doubling the speed of tasks such as image generation and editing while maintaining generation quality.
Masked Discrete Diffusion Models (MDMs) have achieved strong performance across a wide range of multimodal tasks, including image understanding, generation, and editing. However, their inference speed remains suboptimal due to the need to repeatedly process redundant masked tokens at every sampling step. In this work, we propose Sparse-LaViDa, a novel modeling framework that dynamically truncates unnecessary masked tokens at each inference step to accelerate MDM sampling. To preserve generation quality, we introduce specialized register tokens that serve as compact representations for the truncated tokens. Furthermore, to ensure consistency between training and inference, we design a specialized attention mask that faithfully matches the truncated sampling procedure during training. Built upon the state-of-the-art unified MDM LaViDa-O, Sparse-LaViDa achieves up to a 2x speedup across diverse tasks including text-to-image generation, image editing, and mathematical reasoning, while maintaining generation quality.
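To make the truncation idea concrete, below is a minimal PyTorch sketch of one sparse sampling step. Everything here is a hypothetical illustration rather than the authors' implementation: the toy denoiser `TinyMDM`, the constants `MASK_ID`, `VOCAB`, `DIM`, and `NUM_REGISTERS`, and the `sparse_step` helper are all assumptions, and the paper's actual register design and training-time attention mask are more involved. The sketch only shows the general pattern: feed the visible context, this step's mask tokens, and a fixed number of register tokens, and drop every other masked token from the forward pass.

```python
import torch
import torch.nn as nn

# Hypothetical constants for illustration (not from the paper).
MASK_ID = 0          # id of the [MASK] token
VOCAB = 1024         # vocabulary size
DIM = 64             # model width
NUM_REGISTERS = 4    # register tokens standing in for truncated masks


class TinyMDM(nn.Module):
    """Toy stand-in denoiser: embed tokens, self-attend, predict logits."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        # Learned register embeddings: compact summaries of the masked
        # tokens that are dropped from the forward pass.
        self.registers = nn.Parameter(torch.randn(NUM_REGISTERS, DIM))
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, x):                  # x: (B, T, DIM)
        return self.head(self.encoder(x))  # (B, T, VOCAB)


@torch.no_grad()
def sparse_step(model, tokens, decode_pos):
    """One truncated sampling step.

    tokens:     (B, L) sequence; undecoded slots hold MASK_ID.
    decode_pos: (B, K) masked positions to predict at this step.

    Only the visible context, this step's K mask tokens, and the register
    tokens enter the model; all other masked tokens are truncated away.
    """
    B, L = tokens.shape
    keep = tokens != MASK_ID               # visible (already decoded) context
    keep.scatter_(1, decode_pos, True)     # plus the masks decoded this step

    logits = torch.zeros(B, L, VOCAB)      # scatter target for kept positions
    for b in range(B):                     # kept lengths differ per sample
        idx = keep[b].nonzero(as_tuple=True)[0]
        x = model.embed(tokens[b, idx]).unsqueeze(0)         # (1, T, DIM)
        x = torch.cat([x, model.registers.unsqueeze(0)], 1)  # append registers
        logits[b, idx] = model(x)[0, : idx.numel()]          # drop register outputs

    # Commit greedy predictions only at this step's positions.
    pred = logits.argmax(-1)
    return tokens.scatter(1, decode_pos, pred.gather(1, decode_pos))


# Usage: decode two of eight masked tokens without attending to the other six.
model = TinyMDM()
tokens = torch.randint(1, VOCAB, (1, 16))
tokens[0, 8:] = MASK_ID                    # second half still masked
tokens = sparse_step(model, tokens, torch.tensor([[8, 9]]))
```

Because only the visible context, the K positions being decoded, and a constant number of registers enter the attention layers, the per-step cost shrinks as fewer masked tokens remain, which is where the reported speedup comes from.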
Source: arXiv: 2512.14008