Error Propagation and Model Collapse in Diffusion Models: A Theoretical Study
1️⃣ One-Sentence Summary
This paper theoretically analyzes how the generation quality of diffusion models degrades through error accumulation when they are repeatedly trained on their own synthetic outputs, and characterizes how this "model collapse" phenomenon varies with the mix of synthetic and fresh training data.
Machine learning models are increasingly trained or fine-tuned on synthetic data. Recursively training on such data has been observed to significantly degrade performance in a wide range of tasks, often characterized by a progressive drift away from the target distribution. In this work, we theoretically analyze this phenomenon in the setting of score-based diffusion models. For a realistic pipeline where each training round uses a combination of synthetic data and fresh samples from the target distribution, we obtain upper and lower bounds on the accumulated divergence between the generated and target distributions. This allows us to characterize different regimes of drift, depending on the score estimation error and the proportion of fresh data used in each generation. We also provide empirical results on synthetic data and images to illustrate the theory.
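To make the recursive pipeline concrete, here is a minimal, hypothetical sketch (not the authors' code): it stands in for the score-based diffusion model with the simplest possible generative model, a 1-D Gaussian refit by maximum likelihood each round, and mixes a fraction `fresh_frac` of fresh target samples with synthetic samples drawn from the previous generation's fit. The KL divergence to the target is a rough proxy for the accumulated drift that the paper bounds; all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target_mu, target_sigma = 0.0, 1.0   # target distribution N(0, 1)
n, fresh_frac, generations = 1_000, 0.1, 50  # assumed values, for illustration

mu, sigma = target_mu, target_sigma  # generation-0 "model" starts at the target
for g in range(generations):
    n_fresh = int(fresh_frac * n)
    fresh = rng.normal(target_mu, target_sigma, n_fresh)   # fresh target samples
    synthetic = rng.normal(mu, sigma, n - n_fresh)         # self-generated samples
    data = np.concatenate([fresh, synthetic])
    mu, sigma = data.mean(), data.std()                    # refit the "model"
    # KL( N(mu, sigma^2) || N(target_mu, target_sigma^2) ) tracks drift from the target
    kl = (np.log(target_sigma / sigma)
          + (sigma**2 + (mu - target_mu) ** 2) / (2 * target_sigma**2)
          - 0.5)
    if g % 10 == 9:
        print(f"gen {g + 1:3d}: mu={mu:+.3f} sigma={sigma:.3f} KL={kl:.4f}")
```

Sweeping `fresh_frac` toward 0 illustrates the regime where estimation errors compound across generations, while larger fractions of fresh data keep the fitted distribution anchored near the target, mirroring the drift regimes the abstract describes.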
Source: arXiv: 2602.16601