arXiv submission date: 2025-12-15
📄 Abstract - ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding

Autoregressive models (ARMs) are hindered by slow sequential inference. While masked diffusion models (MDMs) offer a parallel alternative, they suffer from critical drawbacks: high computational overhead from precluding Key-Value (KV) caching, and incoherent generation arising from learning dependencies over an intractable space of token combinations. To address these limitations, we introduce ReFusion, a novel masked diffusion model that achieves superior performance and efficiency by elevating parallel decoding from the token level to a higher slot level, where each slot is a fixed-length, contiguous sub-sequence. This is achieved through an iterative "plan-and-infill" decoding process: a diffusion-based planning step first identifies a set of weakly dependent slots, and an autoregressive infilling step then decodes these selected slots in parallel. The slot-based design simultaneously unlocks full KV cache reuse with a unified causal framework and reduces the learning complexity from the token combination space to a manageable slot-level permutation space. Extensive experiments on seven diverse benchmarks show that ReFusion not only overwhelmingly surpasses prior MDMs with 34% performance gains and an over 18× speedup on average, but also bridges the performance gap to strong ARMs while maintaining a 2.33× average speedup.
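The abstract describes the decoding loop only at a high level. Below is a minimal sketch of how such an iterative plan-and-infill loop could look, assuming hypothetical `plan_weakly_dependent_slots` and `infill_slot` model interfaces and an illustrative slot length; it is not the authors' actual implementation, only the control flow implied by the abstract.

```python
# Hedged sketch of an iterative "plan-and-infill" decoding loop.
# The model interfaces used here (plan_weakly_dependent_slots, infill_slot)
# are hypothetical placeholders, not ReFusion's real API.
from typing import List, Optional, Sequence

SLOT_LEN = 4  # fixed slot length (illustrative value, not from the paper)


def plan_and_infill_decode(model, prompt_ids: Sequence[int], num_slots: int) -> List[int]:
    """Decode num_slots fixed-length slots by alternating planning and infilling."""
    output: List[Optional[int]] = [None] * (num_slots * SLOT_LEN)  # all slots start masked
    filled = [False] * num_slots

    while not all(filled):
        # Planning step: a diffusion-based planner selects a set of still-masked
        # slots that are weakly dependent on each other.
        chosen = model.plan_weakly_dependent_slots(prompt_ids, output, filled)

        # Infilling step: each chosen slot is decoded autoregressively
        # (left to right within the slot). Different slots can be decoded in
        # parallel and reuse the KV cache of the already-decoded context.
        for slot_idx in chosen:  # conceptually parallel across slots
            start = slot_idx * SLOT_LEN
            output[start:start + SLOT_LEN] = model.infill_slot(prompt_ids, output, slot_idx)
            filled[slot_idx] = True

    return [tok for tok in output if tok is not None]


class _ToyModel:
    """Trivial stand-in so the sketch runs end-to-end; NOT the real model."""

    def plan_weakly_dependent_slots(self, prompt_ids, output, filled):
        # Pretend every remaining slot is weakly dependent on the others.
        return [i for i, done in enumerate(filled) if not done]

    def infill_slot(self, prompt_ids, output, slot_idx):
        # Emit dummy token ids for the slot.
        return [slot_idx * SLOT_LEN + k for k in range(SLOT_LEN)]


if __name__ == "__main__":
    print(plan_and_infill_decode(_ToyModel(), prompt_ids=[1, 2, 3], num_slots=3))
```

Per the abstract, it is this slot-level, autoregressive infilling inside a unified causal framework that allows full KV cache reuse, while planning over slots shrinks the dependency space the model must learn from token combinations to slot-level permutations.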

Top tags: llm, model training, natural language processing
Detailed tags: parallel decoding, diffusion models, autoregressive models, efficiency, kv caching

ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding


1️⃣ One-sentence summary

This paper introduces a new model, ReFusion, which elevates parallel decoding from individual tokens to a higher-level "slot" granularity and uses a two-step "plan-and-infill" decoding strategy. It substantially speeds up generation while preserving output quality, closing the performance-and-efficiency gap between conventional autoregressive models and parallel diffusion models.


Source: arXiv:2512.13586