arXiv submission date: 2026-03-23
📄 Abstract - MemDLM: Memory-Enhanced DLM Training

Diffusion Language Models (DLMs) offer attractive advantages over Auto-Regressive (AR) models, such as full-attention parallel decoding and flexible generation. However, they suffer from a notable train-inference mismatch: DLMs are trained with a static, single-step masked prediction objective, but deployed through a multi-step progressive denoising trajectory. We propose MemDLM (Memory-Enhanced DLM), which narrows this gap by embedding a simulated denoising process into training via Bi-level Optimization. An inner loop updates a set of fast weights, forming a Parametric Memory that captures the local trajectory experience of each sample, while an outer loop updates the base model conditioned on this memory. By offloading memorization pressure from token representations to parameters, MemDLM yields faster convergence and lower training loss. Moreover, the inner loop can be re-enabled at inference time as an adaptation step, yielding additional gains on long-context understanding. We find that, when activated at inference time, this Parametric Memory acts as an emergent in-weight retrieval mechanism, helping MemDLM further reduce token-level attention bottlenecks on challenging Needle-in-a-Haystack retrieval tasks. Code: this https URL.
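The bi-level structure described above (an inner loop writing per-sample "fast weights" that serve as parametric memory, and an outer loop updating the base model conditioned on that memory) can be sketched on a toy problem. This is a minimal NumPy illustration of the optimization pattern only; the function names `inner_loop` and `outer_step` and the quadratic losses are illustrative assumptions, not the paper's actual denoising objectives.

```python
import numpy as np

def inner_loop(base_w, x, steps=3, lr=0.2):
    """Inner loop: adapt fast weights (parametric memory) on a
    per-sample loss, here the toy quadratic ((base_w + fast_w) - x)^2."""
    fast_w = np.zeros_like(base_w)  # memory starts empty for each sample
    for _ in range(steps):
        grad = 2.0 * (base_w + fast_w - x)
        fast_w -= lr * grad          # fast-weight update only
    return fast_w

def outer_step(base_w, batch, lr=0.1):
    """Outer loop: update slow (base) weights, with each sample's
    loss evaluated conditioned on its adapted memory."""
    grad = np.zeros_like(base_w)
    for x in batch:
        fast_w = inner_loop(base_w, x)
        grad += 2.0 * (base_w + fast_w - x)
    return base_w - lr * grad / len(batch)

# Slow weights drift toward the batch mean (2.0) while fast weights
# absorb per-sample detail, mirroring the memorization offloading idea.
w = np.array([0.0])
for _ in range(200):
    w = outer_step(w, [np.array([1.0]), np.array([3.0])])
```

Because the inner loop only partially closes each sample's error, the residual still carries gradient signal to the outer loop, so the base weights keep learning the shared structure while the memory handles sample-specific detail.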

Top-level tags: natural language processing, model training, llm
Detailed tags: diffusion language models, memory enhancement, bi-level optimization, inference adaptation, retrieval

MemDLM: Memory-Enhanced DLM Training


1️⃣ One-sentence summary

This paper proposes a new method called MemDLM, which embeds a simulated denoising process into training to reduce the mismatch between how diffusion language models are trained and how they run at inference. The result is faster, better learning, and, when the adaptation step is re-enabled at inference time, stronger performance on long-context understanding and retrieval tasks.

Source: arXiv 2603.22241