arXiv submission date: 2026-03-16
📄 Abstract - Mask Is What DLLM Needs: A Masked Data Training Paradigm for Diffusion LLMs

Discrete diffusion models offer global context awareness and flexible parallel generation. However, the uniform random noise schedulers used in standard DLLM training overlook the highly non-uniform information density inherent in real-world sequences. This wastes optimization resources on low-density structural glue while leaving high-density logical pivot points severely under-optimized. To address this, we propose an Information Density Driven Smart Noise Scheduler. By extracting information-dense hubs and applying Complementary Priority Masking, our method decouples a single training instance into mutually reinforcing reasoning and syntax samples, forcing the model to master both logical deduction and foundational sequence structure. Experiments demonstrate that our approach improves average accuracy by ~4% across four Code and Math reasoning benchmarks, significantly outperforming uniform baselines. Mechanistic analyses further reveal that probabilistic priority masking effectively mitigates contextual collapse during block diffusion training. Overall, this density-aware strategy efficiently unlocks the reasoning potential of diffusion language models at minimal annotation cost, emerging as a promising new masked data training paradigm for Diffusion LLMs. Our processed dataset can be found at this https URL.
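The core idea of Complementary Priority Masking can be sketched in a few lines: given per-token information-density scores, mask the densest tokens to form a "reasoning" sample and mask the complement to form a "syntax" sample, so every position is trained under exactly one of the two views. The sketch below is a minimal illustration assuming precomputed density scores; the function name, the top-k thresholding, and the deterministic (rather than probabilistic) selection are assumptions for illustration, not the paper's exact procedure.

```python
def complementary_priority_masks(tokens, density, mask_ratio=0.5, mask_tok="[MASK]"):
    """Split one sequence into two complementary masked training samples.

    tokens:  the original token sequence.
    density: per-token information-density scores (higher = more pivotal);
             how these are obtained is left to the density estimator.
    Returns (reasoning_sample, syntax_sample): the reasoning sample masks
    the highest-density tokens, the syntax sample masks all the rest, so
    each position is masked in exactly one of the two samples.
    """
    n = len(tokens)
    k = max(1, int(n * mask_ratio))
    # Indices of the k most information-dense tokens (the "hubs").
    order = sorted(range(n), key=lambda i: density[i], reverse=True)
    dense = set(order[:k])
    reasoning = [mask_tok if i in dense else t for i, t in enumerate(tokens)]
    syntax = [t if i in dense else mask_tok for i, t in enumerate(tokens)]
    return reasoning, syntax
```

The complementarity is the point: the two samples jointly cover the full sequence, so the model must reconstruct logical pivots from syntactic scaffolding in one sample and vice versa in the other.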

Top-level tags: llm, model training, natural language processing
Detailed tags: diffusion language models, noise scheduling, masked training, reasoning, code generation

Mask Is What DLLM Needs: A Masked Data Training Paradigm for Diffusion LLMs


1️⃣ One-Sentence Summary

This paper proposes a new method that intelligently schedules training noise according to information density: by preferentially masking key information, it forces the diffusion language model to learn both logical reasoning and syntactic structure, significantly improving performance on code and math reasoning tasks.

Source: arXiv:2603.15803