arXiv submission date: 2026-03-02
📄 Abstract - Characterizing Memorization in Diffusion Language Models: Generalized Extraction and Sampling Effects

Autoregressive language models (ARMs) have been shown to memorize and occasionally reproduce training data verbatim, raising concerns about privacy and copyright liability. Diffusion language models (DLMs) have recently emerged as a competitive alternative, yet their memorization behavior remains largely unexplored due to fundamental differences in generation dynamics. To address this gap, we present a systematic theoretical and empirical characterization of memorization in DLMs. We propose a generalized probabilistic extraction framework that unifies prefix-conditioned decoding and diffusion-based generation under arbitrary masking patterns and stochastic sampling trajectories. Theorem 4.3 establishes a monotonic relationship between sampling resolution and memorization: increasing resolution strictly increases the probability of exact training data extraction, implying that autoregressive decoding corresponds to a limiting case of diffusion-based generation in which the sampling resolution is set to its maximum. Extensive experiments across model scales and sampling strategies validate our theoretical predictions. Under aligned prefix-conditioned evaluations, we further demonstrate that DLMs exhibit substantially lower memorization-based leakage of personally identifiable information (PII) compared to ARMs.
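The abstract's "prefix-conditioned extraction" evaluations reduce to a simple metric: feed the model a training-set prefix and check whether it reproduces the training continuation verbatim. A minimal sketch of that metric follows; the `generate` callable and the toy memorized lookup table are illustrative stand-ins, not the paper's actual models or data.

```python
# Hedged sketch of a prefix-conditioned verbatim-extraction metric.
# Names (extraction_rate, toy_generate) are illustrative assumptions,
# not APIs from the paper.

def extraction_rate(generate, examples):
    """Fraction of (prefix, continuation) pairs for which the model's
    output reproduces the training continuation exactly."""
    hits = sum(1 for prefix, cont in examples if generate(prefix) == cont)
    return hits / len(examples)

# Toy stand-in for a model that has memorized one training sequence.
MEMORIZED = {"The quick brown": " fox jumps over the lazy dog"}

def toy_generate(prefix):
    # Return the memorized continuation if present, else "novel" text.
    return MEMORIZED.get(prefix, " <novel text>")

examples = [
    ("The quick brown", " fox jumps over the lazy dog"),  # memorized
    ("Once upon a",     " time there was a model"),       # not memorized
]
print(extraction_rate(toy_generate, examples))  # → 0.5
```

Under the paper's framing, an ARM and a DLM would both be scored this way; the DLM is additionally swept over sampling resolutions (denoising steps / tokens unmasked per step), which Theorem 4.3 predicts monotonically increases this extraction rate.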

Top-level tags: natural language processing, model training, model evaluation
Detailed tags: diffusion language models, memorization, data extraction, privacy, sampling resolution

Characterizing Memorization in Diffusion Language Models: Generalized Extraction and Sampling Effects


1️⃣ One-sentence summary

By building a unified probabilistic extraction framework, this paper gives the first systematic characterization of memorization in diffusion language models (DLMs), finding that their ability to memorize training data strictly increases with sampling resolution, and that under matched conditions they pose a lower risk of leaking personal information than autoregressive models.

Source: arXiv 2603.02333