DiffusionVL: Translating Any Autoregressive Models into Diffusion Vision Language Models
1️⃣ One-Sentence Summary
This paper proposes DiffusionVL, a method that readily converts existing, powerful autoregressive vision language models into better-performing diffusion-based models, significantly improving scores across multiple benchmarks while also delivering faster inference.
2️⃣ Abstract
In recent multimodal research, the diffusion paradigm has emerged as a promising alternative to the autoregressive (AR) paradigm, owing to its unique decoding advantages. However, due to the capability limitations of the underlying diffusion language models, the performance of diffusion vision language models (dVLMs) still lags significantly behind that of mainstream models. This leads to a simple yet fundamental question: is it possible to construct dVLMs based on existing powerful AR models? In response, we propose DiffusionVL, a family of dVLMs that can be translated from any powerful AR model. Through simple fine-tuning, we successfully adapt AR pre-trained models into the diffusion paradigm. This approach yields two key observations: (1) the paradigm shift from AR-based multimodal models to diffusion is remarkably effective, and (2) direct conversion of an AR language model into a dVLM is also feasible, achieving performance competitive with LLaVA-style visual instruction tuning. Further, we introduce a block-decoding design into dVLMs that supports arbitrary-length generation and KV cache reuse, achieving a significant inference speedup. We conduct extensive experiments. Despite training with less than 5% of the data required by prior methods, DiffusionVL achieves a comprehensive performance improvement: a 34.4% gain on the MMMU-Pro (vision) benchmark and a 37.5% gain on the MME (Cog.) benchmark, alongside a 2x inference speedup. The model and code are released at this https URL.
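The block-decoding design is only sketched in the abstract, so here is a minimal, hypothetical illustration of the general idea: generate a fixed-size block of masked tokens, denoise it over a few diffusion steps while attending to a KV cache of everything already committed, then freeze the block and extend the cache. All names here (`TinyDenoiser`, `block_decode`, `MASK_ID`, the `BLOCK`/`STEPS` constants) are assumptions for illustration, not the authors' released implementation; a real dVLM would supply the transformer backbone and actual key/value caching.

```python
# Hypothetical sketch of block-wise diffusion decoding with KV-cache reuse.
# Not DiffusionVL's actual code: the model interface is a stand-in.
import torch

MASK_ID = 0   # hypothetical [MASK] token id (id 0 reserved for masking here)
BLOCK = 32    # tokens generated per block
STEPS = 8    # diffusion denoising steps per block (BLOCK % STEPS == 0 assumed)


class TinyDenoiser(torch.nn.Module):
    """Stand-in for a dVLM backbone: returns random logits and echoes the cache."""

    def __init__(self, vocab: int = 1000):
        super().__init__()
        self.vocab = vocab

    def forward(self, ids, past_kv=None):
        # A real model would attend over past_kv + ids and return an updated cache.
        return torch.randn(*ids.shape, self.vocab), past_kv


@torch.no_grad()
def block_decode(model, prompt_ids, max_new_tokens):
    # Encode the prompt once; finished context lives in the KV cache, so each
    # denoising step only re-encodes the current BLOCK tokens (the speedup).
    _, cache = model(prompt_ids)
    out = [prompt_ids]
    for _ in range(max_new_tokens // BLOCK):
        block = torch.full((1, BLOCK), MASK_ID, dtype=torch.long)  # fully masked
        for step in range(STEPS):
            logits, _ = model(block, past_kv=cache)
            probs = logits[..., 1:].softmax(-1)      # never predict the mask id
            conf, pred = probs.max(-1)
            pred = pred + 1                          # undo the slice offset
            # Confidence-based parallel unmasking: commit the most confident
            # still-masked positions; already-committed tokens stay fixed.
            conf = conf.masked_fill(block.ne(MASK_ID), -1.0)
            idx = conf.topk(BLOCK // STEPS, dim=-1).indices
            block.scatter_(1, idx, pred.gather(1, idx))
        # Freeze the finished block and fold its states into the cache,
        # enabling arbitrary-length generation block by block.
        _, cache = model(block, past_kv=cache)
        out.append(block)
    return torch.cat(out, dim=-1)


# Toy usage: a 5-token prompt extended by 64 tokens in two blocks.
ids = block_decode(TinyDenoiser(), torch.randint(1, 1000, (1, 5)), 64)
print(ids.shape)  # torch.Size([1, 69])
```

Because committed tokens never re-enter the denoiser's input beyond the current block, the per-step cost depends on the block size rather than the full sequence length, which is consistent with the 2x speedup the abstract reports.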
Source: arXiv: 2512.15713