arXiv submission date: 2026-02-18
📄 Abstract - One-step Language Modeling via Continuous Denoising

Language models based on discrete diffusion have attracted widespread interest for their potential to provide faster generation than autoregressive models. In practice, however, they exhibit a sharp degradation of sample quality in the few-step regime, failing to realize this promise. Here we show that language models leveraging flow-based continuous denoising can outperform discrete diffusion in both quality and speed. By revisiting the fundamentals of flows over discrete modalities, we build a flow-based language model (FLM) that performs Euclidean denoising over one-hot token encodings. We show that the model can be trained by predicting the clean data via a cross entropy objective, where we introduce a simple time reparameterization that greatly improves training stability and generation quality. By distilling FLM into its associated flow map, we obtain a distilled flow map language model (FMLM) capable of few-step generation. On the LM1B and OWT language datasets, FLM attains generation quality matching state-of-the-art discrete diffusion models. With FMLM, our approach outperforms recent few-step language models across the board, with one-step generation exceeding their 8-step quality. Our work calls into question the widely held hypothesis that discrete diffusion processes are necessary for generative modeling over discrete modalities, and paves the way toward accelerated flow-based language modeling at scale. Code is available at this https URL.
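The abstract's core training recipe (Euclidean denoising over one-hot token encodings, with the model trained to predict the clean data under a cross-entropy objective) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy: the straight-line noise-to-data interpolant, the toy linear predictor `W`, and all function names are illustrative, not the paper's actual architecture or schedule, and the time reparameterization the paper introduces is omitted.

```python
import numpy as np

def one_hot(tokens, vocab_size):
    """Map integer token ids onto one-hot vectors, the continuous state space."""
    return np.eye(vocab_size, dtype=np.float32)[tokens]

def noisy_state(x1, t, rng):
    """Flow state at time t on a straight-line path from Gaussian noise (t=0)
    to the clean one-hot encodings (t=1). The linear interpolant is an
    assumption for illustration; the paper's exact path may differ."""
    x0 = rng.standard_normal(x1.shape).astype(np.float32)
    return (1.0 - t) * x0 + t * x1

def cross_entropy(logits, tokens):
    """Clean-data prediction loss: the model sees the noisy state and is
    trained to classify the original token (standard cross entropy)."""
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(tokens)), tokens].mean()

# Toy training step: sample a time, corrupt the one-hots, predict the clean tokens.
rng = np.random.default_rng(0)
vocab_size = 8
tokens = np.array([1, 3, 5])
x1 = one_hot(tokens, vocab_size)
xt = noisy_state(x1, t=0.5, rng=rng)
W = rng.standard_normal((vocab_size, vocab_size)).astype(np.float32)  # stand-in for the denoising network
loss = cross_entropy(xt @ W, tokens)
```

In a real model, `W` would be a transformer conditioned on `t`, and generation would integrate the learned flow from noise toward one-hot vertices; the distilled flow map (FMLM) would replace that integration with one or a few direct jumps.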

Top-level tags: natural language processing · model training · machine learning
Detailed tags: language modeling · flow-based models · continuous denoising · few-step generation · non-autoregressive generation

One-step Language Modeling via Continuous Denoising


1️⃣ One-sentence summary

This paper proposes a language model based on continuous denoising flows. It is trained by predicting the clean data, and can be distilled into a model that generates high-quality text in a single step, outperforming existing discrete diffusion models that require many generation steps.

Source: arXiv:2602.16813