📄 Abstract - Masks Can Be Distracting: On Context Comprehension in Diffusion Language Models

Masked Diffusion Language Models (MDLMs) have recently emerged as a promising alternative to Autoregressive Language Models (ARLMs), leveraging a denoising objective that, in principle, should enable more uniform context utilisation. In this work, we examine the context comprehension abilities of MDLMs and uncover two key limitations. First, despite their more global training objective and bidirectional attention mechanism, MDLMs exhibit a strong locality bias similar to that of ARLMs: performance is highly sensitive to the position of relevant information within the input, favouring local over distant context. Second, we show that appending a large number of mask tokens, as required for generation, can significantly degrade context comprehension. Through systematic ablations, we find that these masks act as distractors, reducing the model's ability to process relevant information. To address this, we introduce a mask-agnostic loss function that encourages predictions to remain invariant to the number of appended masks. Fine-tuning with this objective substantially mitigates the distracting effect of masks, improving the robustness of MDLMs. Overall, our findings reveal critical limitations of the current MDLM training paradigm and provide actionable insights for building diffusion-based language models with stronger context comprehension.
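The abstract does not spell out the exact form of the mask-agnostic loss, but the underlying invariance idea can be sketched. Below is a minimal PyTorch illustration, assuming a HuggingFace-style model whose forward pass returns `.logits`: it appends two different numbers of mask tokens to the same context and penalises the symmetric KL divergence between the resulting predictive distributions over the shared context positions. The function name, the two mask counts, and the choice of symmetric KL are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def mask_agnostic_loss(model, input_ids, mask_token_id,
                       num_masks_a=16, num_masks_b=256):
    """Hypothetical sketch of a mask-agnostic consistency loss.

    Runs the model on the same context with two different numbers of
    appended mask tokens and penalises divergence between the predictive
    distributions over the original context positions, encouraging
    predictions to be invariant to the number of appended masks.
    """
    batch, seq_len = input_ids.shape
    device = input_ids.device

    def with_masks(n):
        # Append n mask tokens after the context.
        masks = torch.full((batch, n), mask_token_id,
                           dtype=input_ids.dtype, device=device)
        return torch.cat([input_ids, masks], dim=1)

    # Logits over the shared (context) positions under both conditions.
    logits_a = model(with_masks(num_masks_a)).logits[:, :seq_len]
    logits_b = model(with_masks(num_masks_b)).logits[:, :seq_len]

    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)

    # Symmetric KL between the two conditions (an illustrative choice;
    # the paper may use a different divergence or reference condition).
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)
```

In training, such a term would presumably be added to the standard masked-denoising objective with a weighting coefficient, so that the model keeps its generative ability while learning to ignore the appended masks.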

Top-level tags: natural language processing, model training, model evaluation
Detailed tags: diffusion language models, context comprehension, masked denoising, training objective, attention bias

Masks Can Be Distracting: On Context Comprehension in Diffusion Language Models


1️⃣ One-Sentence Summary

This paper finds that the emerging masked diffusion language models suffer from two main problems when comprehending textual context: they over-attend to local information while neglecting distant content, and the extra mask tokens required for generation severely interfere with the model's processing of the original input. The authors propose a new training objective that effectively reduces this mask interference and improves the models' robustness.

