Learning Unmasking Policies for Diffusion Language Models
1️⃣ One-Sentence Summary
This paper proposes using reinforcement learning to train a policy that automatically decides, at each step of a diffusion language model's text generation, which masked tokens to "unmask" in parallel, improving efficiency while preserving generation quality and avoiding the drawbacks of manually tuned heuristics.
Diffusion (Large) Language Models (dLLMs) now match the downstream performance of their autoregressive counterparts on many tasks, while holding the promise of being more efficient during inference. One particularly successful variant is masked discrete diffusion, in which a buffer filled with special mask tokens is progressively replaced with tokens sampled from the model's vocabulary. Efficiency can be gained by unmasking several tokens in parallel, but doing too many at once risks degrading the generation quality. Thus, one critical design aspect of dLLMs is the sampling procedure that selects, at each step of the diffusion process, which tokens to replace. Indeed, recent work has found that heuristic strategies such as confidence thresholding lead to both higher quality and token throughput compared to random unmasking. However, such heuristics have downsides: they require manual tuning, and we observe that their performance degrades with larger buffer sizes. In this work, we instead propose to train sampling procedures using reinforcement learning. Specifically, we formalize masked diffusion sampling as a Markov decision process in which the dLLM serves as the environment, and propose a lightweight policy architecture based on a single-layer transformer that maps dLLM token confidences to unmasking decisions. Our experiments show that these trained policies match the performance of state-of-the-art heuristics when combined with semi-autoregressive generation, while outperforming them in the full diffusion setting. We also examine the transferability of these policies, finding that they can generalize to new underlying dLLMs and longer sequence lengths. However, we also observe that their performance degrades when applied to out-of-domain data, and that fine-grained tuning of the accuracy-efficiency trade-off can be challenging with our approach.
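The confidence-thresholding heuristic that the abstract uses as a baseline can be sketched in a few lines: at each diffusion step, unmask every still-masked position whose model confidence clears a threshold, falling back to the single most confident position so the sampler always makes progress. The function name, signature, and threshold value below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of confidence-threshold unmasking for a masked
# discrete diffusion sampler. Names and the threshold are assumptions.

def select_unmask(confidences, masked, threshold=0.9):
    """Return indices of masked positions to unmask this step.

    confidences: per-position top-token probability from the dLLM.
    masked: booleans, True where the buffer still holds a mask token.
    """
    candidates = [i for i, m in enumerate(masked) if m]
    chosen = [i for i in candidates if confidences[i] >= threshold]
    if not chosen and candidates:
        # Always unmask at least one token so generation terminates.
        chosen = [max(candidates, key=lambda i: confidences[i])]
    return chosen

conf = [0.95, 0.40, 0.99, 0.60]
mask = [True, True, True, False]
print(select_unmask(conf, mask))  # → [0, 2]
```

The learned policy proposed in the paper replaces this fixed rule with a small transformer that maps the same per-token confidences to unmasking decisions, removing the need to hand-tune the threshold.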
Source: arXiv: 2512.09106