An Empirical Study on Noisy Data and LLM Pretraining Loss Divergence
1️⃣ One-sentence summary
Through large-scale experiments, this study confirms that when the web data used in LLM pretraining contains too much random noise, training can indeed fail (loss divergence), and that the probability of failure depends strongly on the noise type, the amount of noise, and the model scale.
Large-scale pretraining datasets drive the success of large language models (LLMs). However, these web-scale corpora inevitably contain large amounts of noisy data due to unregulated web content or randomness inherent in the data. Although LLM pretrainers often speculate that such noise contributes to instabilities in large-scale LLM pretraining and, in the worst cases, loss divergence, this phenomenon remains poorly understood. In this work, we present a systematic empirical study of whether noisy data causes LLM pretraining divergences and how it does so. By injecting controlled synthetic uniformly random noise into otherwise clean datasets, we analyze training dynamics across model sizes ranging from 480M to 5.2B parameters. We show that noisy data indeed induces training loss divergence, and that the probability of divergence depends strongly on the noise type, amount of noise, and model scale. We further find that noise-induced divergences exhibit activation patterns distinct from those caused by high learning rates, and we provide diagnostics that differentiate these two failure modes. Together, these results provide a large-scale, controlled characterization of how noisy data affects loss divergence in LLM pretraining.
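The abstract's core manipulation — injecting controlled, uniformly random noise into an otherwise clean corpus — can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the token-level injection granularity, and all parameters are assumptions made here for illustration only.

```python
import random

def inject_uniform_noise(token_ids, vocab_size, noise_fraction, seed=0):
    """Replace a fraction of token positions with uniformly random token ids.

    Hypothetical sketch of the abstract's "controlled synthetic uniformly
    random noise"; the paper's actual injection granularity (token-, span-,
    or document-level) is not specified here and is assumed token-level.
    """
    rng = random.Random(seed)  # fixed seed => controlled, reproducible noise
    noisy = list(token_ids)
    n_noise = int(len(noisy) * noise_fraction)
    # Pick distinct positions, then overwrite each with a token drawn
    # uniformly from the full vocabulary.
    for pos in rng.sample(range(len(noisy)), n_noise):
        noisy[pos] = rng.randrange(vocab_size)
    return noisy
```

Sweeping `noise_fraction` (the "amount of noise") across runs at different model scales is the kind of controlled grid the abstract describes when relating divergence probability to noise level and parameter count.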
Source: arXiv:2602.02400