📄 Abstract - The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models
Diffusion Large Language Models (dLLMs) break the rigid left-to-right constraint of traditional LLMs, enabling token generation in arbitrary orders. Intuitively, this flexibility implies a solution space that is a strict superset of the fixed autoregressive trajectory, theoretically unlocking superior reasoning potential on general tasks such as mathematics and coding. Consequently, numerous works have leveraged reinforcement learning (RL) to elicit the reasoning capability of dLLMs. In this paper, we reveal a counter-intuitive reality: arbitrary-order generation, in its current form, narrows rather than expands the reasoning boundary of dLLMs. We find that dLLMs tend to exploit this order flexibility to bypass high-uncertainty tokens that are crucial for exploration, leading to a premature collapse of the solution space. This observation challenges the premise of existing RL approaches for dLLMs, where considerable complexity, such as handling combinatorial trajectories and intractable likelihoods, is devoted to preserving this flexibility. We demonstrate that effective reasoning is better elicited by intentionally forgoing arbitrary order and applying standard Group Relative Policy Optimization (GRPO) instead. Our approach, JustGRPO, is minimalist yet surprisingly effective (e.g., 89.1% accuracy on GSM8K) while fully retaining the parallel decoding ability of dLLMs. Project page: this https URL
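The abstract's point is that the dLLM-specific RL machinery can be dropped in favor of standard GRPO. As a point of reference, here is a minimal sketch of the group-relative advantage at the core of standard GRPO; this is not the paper's code, and the function name and 0/1 correctness reward are illustrative assumptions.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each sampled completion's reward
    against the mean and std of its group (completions for the same prompt).

    rewards: shape (num_prompts, group_size)
    returns: advantages of the same shape
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: one prompt, a group of 4 sampled solutions scored 1 if the final
# answer is correct and 0 otherwise.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0]])
print(grpo_advantages(rewards))  # correct samples receive positive advantage
```

Because the advantage only depends on sampled completions and their scalar rewards, this objective sidesteps the combinatorial-trajectory and likelihood issues the abstract attributes to order-preserving RL formulations.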
The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models
1️⃣ One-Sentence Summary
This paper finds that although diffusion large language models can generate text in arbitrary order, which in theory offers a larger exploration space, in practice this flexibility leads the model to avoid generating crucial but uncertain tokens, limiting rather than improving its ability to solve complex reasoning tasks such as mathematics and coding; the authors show that giving up this arbitrary-order flexibility and applying a simpler optimization method (standard GRPO) instead yields a significant performance gain.
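The "avoid uncertain tokens" behavior follows from how masked dLLMs are commonly decoded: at each step the model unmasks the positions it is most confident about and defers the rest. The sketch below illustrates that selection rule only; the function name is hypothetical and the paper's experiments may use a different remasking schedule.

```python
import torch

def pick_tokens_to_unmask(logits: torch.Tensor, masked: torch.Tensor, k: int):
    """Confidence-based unmasking, a common dLLM decoding heuristic.

    logits: (seq_len, vocab) model predictions for every position
    masked: (seq_len,) boolean mask, True where the token is still masked
    k: number of positions to commit this step
    """
    probs = torch.softmax(logits, dim=-1)
    confidence, tokens = probs.max(dim=-1)            # top-1 confidence per position
    confidence = confidence.masked_fill(~masked, float("-inf"))
    chosen = confidence.topk(k).indices               # most certain positions first
    return chosen, tokens[chosen]
```

Under this rule, high-uncertainty positions are systematically postponed, which is exactly the mechanism the paper identifies as collapsing the solution space prematurely.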