Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules
1️⃣ One-sentence summary
This paper proposes SchED, a training-free, general-purpose early-exit algorithm that dynamically judges the confidence of the model's predictions based on decoding progress. It substantially accelerates inference for diffusion large language models, achieving several-fold efficiency gains with almost no loss in performance.
Diffusion large language models (dLLMs) offer a promising alternative to autoregressive models, but their practical utility is severely hampered by slow, iterative sampling. We present SchED, a training-free, model-agnostic early-exit algorithm that aggregates full-span logit margins and halts decoding once a smooth, progress-dependent confidence threshold is met. We evaluate SchED on two dLLM families (Dream and LLaDA), in base and instruction-tuned variants, across ten benchmarks spanning multiple-choice question answering (MCQ), math, long-form QA/summarization, and translation. SchED delivers large, stable accelerations: on instruction-tuned models, it achieves $3.8$-$4.0\times$ speedups while retaining $99.8$-$100\%$ of the baseline score on average. On base models, SchED yields consistent speedup gains with $99.1$-$100\%$ performance retention, reaching up to $2.34\times$ under more aggressive settings. Using a conservative speed metric that heavily penalizes quality loss (QPS, $\gamma{=}4$), we show that SchED is robust and clearly outperforms prior confidence-based early-exit methods, which break down on long-form generation. An entropy analysis of the model's token predictions reveals that instruction tuning speeds up the decay of predictive entropy. By turning genuine confidence stabilization into computational savings, SchED makes dLLM decoding substantially more efficient.
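The core mechanism described above can be sketched in a few lines: average the top-1 vs. top-2 probability margin over the generated span, and compare it against a threshold that varies smoothly with decoding progress. This is a minimal illustrative sketch only; the paper's exact margin definition, schedule shape, and parameter values are not given here, and the names `tau_start`, `tau_end`, and the quadratic schedule are assumptions for illustration.

```python
import math

def span_confidence(logits):
    """Mean top-1 minus top-2 probability margin over all positions in the span.

    logits: list of per-position lists of unnormalized scores (one list per token slot).
    """
    margins = []
    for scores in logits:
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        probs = sorted(e / z for e in exps)
        margins.append(probs[-1] - probs[-2])
    return sum(margins) / len(margins)

def threshold(progress, tau_start=0.9, tau_end=0.5):
    # Hypothetical smooth schedule: demand high confidence early in decoding,
    # relax the requirement as progress -> 1. Parameters are illustrative.
    return tau_end + (tau_start - tau_end) * (1.0 - progress) ** 2

def should_exit(logits, step, total_steps):
    """Early-exit test: stop refining once span confidence clears the schedule."""
    progress = step / total_steps
    return span_confidence(logits) >= threshold(progress)
```

Because the schedule is strict early and lenient late, a span whose predictions are already sharply peaked triggers an exit late in decoding, while flat (high-entropy) predictions never do, which is how genuine confidence stabilization is converted into skipped refinement steps.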
Source: arXiv:2512.02892