arXiv submission date: 2026-03-23
📄 Abstract - Gumbel Distillation for Parallel Text Generation

The slow, sequential nature of autoregressive (AR) language models has driven the adoption of parallel decoding methods. However, these non-AR models often sacrifice generation quality, as they struggle to model the complex joint distribution of token sequences. To narrow this performance gap, we introduce Gumbel Distillation, a novel distillation technique that enables parallel decoders to learn this distribution effectively. Our method leverages the Gumbel-Max trick to create a deterministic mapping from a latent Gumbel noise space to the output tokens of a high-performing AR teacher. As a model-agnostic technique, Gumbel Distillation seamlessly integrates with diverse parallel decoding architectures, including MDLM and BD3-LM. Experiments on LM1B and OpenWebText show that Gumbel Distillation substantially improves the generation quality of parallel language models, achieving a 30.0% improvement in MAUVE score and a 10.5% improvement in generative perplexity over MDLM trained on the OpenWebText dataset. Code available at this https URL.
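The "deterministic mapping" in the abstract rests on the Gumbel-Max trick: adding independent Gumbel(0, 1) noise to a model's logits and taking the argmax yields an exact sample from the softmax distribution, and once the noise is fixed, the sampled token is a deterministic function of that noise. A minimal sketch of the trick (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def gumbel_max_sample(logits, gumbel_noise):
    # Gumbel-Max trick: argmax(logits + g) with g ~ Gumbel(0, 1) i.i.d.
    # is an exact sample from softmax(logits). Fixing g makes the
    # sampled token a deterministic function of the noise.
    return int(np.argmax(logits + gumbel_noise))

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])          # hypothetical teacher logits
u = rng.uniform(size=logits.shape)           # uniform(0, 1) draws
g = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
token = gumbel_max_sample(logits, g)         # token index sampled via Gumbel-Max
```

Under this view, a parallel student conditioned on the same latent noise can be trained to reproduce the AR teacher's token choices, which is the kind of noise-to-token mapping the abstract describes.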

Top-level tags: natural language processing, model training, machine learning
Detailed tags: parallel decoding, knowledge distillation, non-autoregressive generation, Gumbel-Max trick, language modeling

Gumbel Distillation for Parallel Text Generation


1️⃣ One-sentence summary

This paper proposes a new technique called Gumbel Distillation, which uses a deterministic noise-to-token mapping to let parallel-decoding models learn from a high-quality sequential (autoregressive) model, substantially improving the quality of generated text while preserving fast generation.

Source: arXiv:2603.22216