arXiv submission date: 2026-02-18
📄 Abstract - Discrete Stochastic Localization for Non-autoregressive Generation

Non-autoregressive (NAR) generation reduces decoding latency by predicting many tokens in parallel, but iterative refinement often suffers from error accumulation and distribution shift under self-generated drafts. Masked diffusion language models (MDLMs) and their remasking samplers (e.g., ReMDM) can be viewed as modern NAR iterative refinement, where generation repeatedly revises a partially observed draft. In this work we show that *training alone* can substantially improve the step-efficiency of MDLM/ReMDM sampling. We propose DSL (Discrete Stochastic Localization), which trains a single SNR-invariant denoiser across a continuum of corruption levels, bridging intermediate draft noise and mask-style endpoint corruption within one Diffusion Transformer. On OpenWebText, DSL fine-tuning yields large MAUVE gains at low step budgets, surpassing the MDLM+ReMDM baseline with ~4× fewer denoiser evaluations, and matches autoregressive quality at high budgets. Analyses show improved self-correction and uncertainty calibration, making remasking markedly more compute-efficient.
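The abstract's central idea, training one denoiser across a continuum of corruption levels rather than a single masking schedule, can be sketched roughly as follows. This is a minimal illustration only: the `corrupt` function, the `MASK` sentinel, and the uniform sampling of the corruption level `t` are assumptions for exposition, not the paper's actual parameterization or training objective.

```python
import random

MASK = -1  # hypothetical mask token id, stands in for a real vocabulary entry


def corrupt(tokens, t, rng):
    """Mask each token independently with probability t (the corruption level)."""
    return [MASK if rng.random() < t else tok for tok in tokens]


def training_examples(sequences, rng):
    """Yield (corrupted, target, t) triples spanning a continuum of corruption
    levels, so a single denoiser sees everything from light draft noise
    (small t) to fully masked endpoint corruption (t near 1)."""
    for tokens in sequences:
        t = rng.random()  # assumed uniform sampling of the corruption level
        yield corrupt(tokens, t, rng), tokens, t


rng = random.Random(0)
batch = list(training_examples([[5, 7, 9, 11]], rng))
```

In this sketch, a denoiser trained on such triples would be conditioned on (or invariant to) `t`, which is what lets one model serve both intermediate-draft refinement and from-scratch generation.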

Top-level tags: natural language processing, model training, machine learning
Detailed tags: non-autoregressive generation, diffusion language models, iterative refinement, sampling efficiency, denoising

Discrete Stochastic Localization for Non-autoregressive Generation


1️⃣ One-sentence summary

This work proposes a new training method called DSL, which trains a single unified denoising model to substantially improve the efficiency and output quality of non-autoregressive text generation, matching the quality of mainstream autoregressive models with far fewer computation steps.

Source: arXiv: 2602.16169