arXiv submission date: 2026-02-10
📄 Abstract - Where-to-Unmask: Ground-Truth-Guided Unmasking Order Learning for Masked Diffusion Language Models

Masked Diffusion Language Models (MDLMs) generate text by iteratively filling masked tokens, requiring two coupled decisions at each step: which positions to unmask (where-to-unmask) and which tokens to place (what-to-unmask). While standard MDLM training directly optimizes token prediction (what-to-unmask), inference-time unmasking orders (where-to-unmask) are typically determined by heuristic confidence measures or trained through reinforcement learning with costly on-policy rollouts. To address this, we introduce Gt-Margin, a position-wise score derived from ground-truth tokens, defined as the probability margin between the correct token and its strongest alternative. Gt-Margin yields an oracle unmasking order that prioritizes easier positions first under each partially masked state. We demonstrate that leveraging this oracle unmasking order significantly enhances final generation quality, particularly on logical reasoning benchmarks. Building on this insight, we train a supervised unmasking planner via learning-to-rank to imitate the oracle ordering from masked contexts. The resulting planner integrates into standard MDLM sampling to select where-to-unmask, improving reasoning accuracy without modifying the token prediction model.
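To make the Gt-Margin definition concrete, here is a minimal sketch of how the score and the resulting oracle unmasking order could be computed. It assumes PyTorch-style tensors; the function names, shapes, and the use of softmax probabilities are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def gt_margin(logits: torch.Tensor, targets: torch.Tensor,
              masked: torch.Tensor) -> torch.Tensor:
    """Per-position Gt-Margin: p(ground-truth token) - p(strongest alternative).

    logits:  (seq_len, vocab) model outputs for the current partially masked state.
    targets: (seq_len,) ground-truth token ids.
    masked:  (seq_len,) bool, True where the position is still [MASK].
    Returns (seq_len,) scores; higher = easier position. Already-unmasked
    positions are set to -inf so they are never selected.
    """
    probs = logits.softmax(dim=-1)                            # (L, V)
    p_true = probs.gather(-1, targets[:, None]).squeeze(-1)   # prob of the gt token
    # Strongest alternative: max probability after zeroing out the gt token.
    p_alt = probs.scatter(-1, targets[:, None], 0.0).max(dim=-1).values
    margin = p_true - p_alt
    return margin.masked_fill(~masked, float("-inf"))

def oracle_unmask_order(margin: torch.Tensor, k: int) -> torch.Tensor:
    """Oracle where-to-unmask: pick the k easiest (largest-margin) positions."""
    return margin.topk(k).indices
```

Because the score depends on ground-truth tokens, it is only available at training time; the supervised planner described in the abstract is trained (via learning-to-rank) to reproduce this ordering from the masked context alone.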

Top-level tags: natural language processing, model training, model evaluation
Detailed tags: masked diffusion, text generation, reasoning, learning-to-rank, unmasking order

Where-to-Unmask: Ground-Truth-Guided Unmasking Order Learning for Masked Diffusion Language Models


1️⃣ One-Sentence Summary

This paper proposes a method that uses ground-truth text to teach a masked diffusion language model which positions to fill first during generation, improving performance on tasks such as logical reasoning without modifying the token prediction model itself. A sketch of how such a planner could slot into sampling follows below.
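The following is a minimal sketch of planner-guided MDLM sampling under stated assumptions: `mdlm` and `planner` are hypothetical callables standing in for the frozen token predictor and the learned ranker, and greedy argmax filling is used for brevity; the paper's actual sampler may differ.

```python
import torch

@torch.no_grad()
def planner_guided_sample(mdlm, planner, seq, mask_id, steps):
    """MDLM sampling with where-to-unmask delegated to a learned planner.

    mdlm:    frozen token predictor (what-to-unmask), left unchanged.
    planner: learned ranker scoring masked positions (where-to-unmask).
    seq:     (L,) long tensor, initialized to mask_id everywhere.
    """
    masked = seq == mask_id
    per_step = max(1, masked.sum().item() // steps)
    while masked.any():
        logits = mdlm(seq)                           # (L, V) token predictions
        scores = planner(seq)                        # (L,) rank scores
        scores = scores.masked_fill(~masked, float("-inf"))
        k = min(per_step, int(masked.sum()))
        pos = scores.topk(k).indices                 # easiest positions first
        seq[pos] = logits[pos].argmax(dim=-1)        # fill the chosen positions
        masked = seq == mask_id
    return seq
```

The key design point is the separation of concerns: the planner only decides *where* to unmask, so the token predictor and its training recipe stay untouched.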

Source: arXiv:2602.09501