On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models
1️⃣ One-Sentence Summary
Using a fully controlled experimental framework, this study finds that whether reinforcement learning genuinely improves a language model's reasoning ability hinges on two factors: whether pre-training leaves room for improvement, and whether the RL training data target the boundary of the model's competence. It also shows that mid-training is more compute-efficient than RL alone.
Recent reinforcement learning (RL) techniques have yielded impressive reasoning improvements in language models, yet it remains unclear whether post-training truly extends a model's reasoning ability beyond what it acquires during pre-training. A central challenge is the lack of control in modern training pipelines: large-scale pre-training corpora are opaque, mid-training is often underexamined, and RL objectives interact with unknown prior knowledge in complex ways. To resolve this ambiguity, we develop a fully controlled experimental framework that isolates the causal contributions of pre-training, mid-training, and RL-based post-training. Our approach employs synthetic reasoning tasks with explicit atomic operations, parseable step-by-step reasoning traces, and systematic manipulation of training distributions. We evaluate models along two axes: extrapolative generalization to more complex compositions and contextual generalization across surface contexts. Using this framework, we reconcile competing views on RL's effectiveness. We show that: 1) RL produces true capability gains (pass@128) only when pre-training leaves sufficient headroom and when RL data target the model's edge of competence, tasks at the boundary that are difficult but not yet out of reach. 2) Contextual generalization requires minimal yet sufficient pre-training exposure, after which RL can reliably transfer. 3) Mid-training significantly enhances performance under fixed compute compared with RL only, demonstrating its central but underexplored role in training pipelines. 4) Process-level rewards reduce reward hacking and improve reasoning fidelity. Together, these results clarify the interplay between pre-training, mid-training, and RL, offering a foundation for understanding and improving reasoning LM training strategies.
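The abstract measures "true capability gains" with pass@128, i.e., whether any of 128 sampled solutions to a problem is correct. As a point of reference, the sketch below shows the standard unbiased pass@k estimator from the code-generation literature (Chen et al., 2021); the function name, the sampling budget of n=256, and the count of 40 correct samples are illustrative assumptions, not details taken from this paper.

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples drawn per problem
    c: number of those samples that are correct
    k: evaluation budget (e.g. 128)
    Returns the estimated probability that at least one of k
    samples is correct. With n == k this reduces to checking
    whether any sample succeeded.
    """
    if n - c < k:
        # Too few incorrect samples to fill k draws without a correct one.
        return 1.0
    # Product form of 1 - C(n - c, k) / C(n, k); avoids huge binomials.
    prob_all_wrong = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_wrong *= 1.0 - k / i
    return 1.0 - prob_all_wrong


# Hypothetical example: 256 samples per problem, 40 correct -> pass@128
print(pass_at_k(n=256, c=40, k=128))
```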
Source: arXiv 2512.07783