A Deep Dive into Scaling RL for Code Generation with Synthetic Data and Curricula
1️⃣ One-Sentence Summary
This paper proposes a method that generates structured synthetic data through multi-turn interaction and designs difficulty-based curricula, effectively improving the performance and generalization of large language models trained with reinforcement learning on tasks such as code generation.
Reinforcement learning (RL) has emerged as a powerful paradigm for improving large language models beyond supervised fine-tuning, yet sustaining performance gains at scale remains an open challenge, as data diversity and structure, rather than volume alone, become the limiting factor. We address this by introducing a scalable multi-turn synthetic data generation pipeline in which a teacher model iteratively refines problems based on in-context student performance summaries, producing structured difficulty progressions without any teacher fine-tuning. Compared to single-turn generation, this multi-turn approach substantially improves the yield of valid synthetic problems and naturally produces stepping stones, i.e., easier and harder variants of the same core task, that support curriculum-based training. We systematically study how task difficulty, curriculum scheduling, and environment diversity interact during RL training across the Llama3.1-8B Instruct and Qwen3-8B Base model families, with additional scaling experiments on Qwen2.5-32B. Our results show that synthetic augmentation consistently improves in-domain code performance and, in most cases, out-of-domain math performance, and we provide empirical insights into how curriculum design and data diversity jointly shape RL training dynamics.
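The multi-turn pipeline described in the abstract can be sketched as a simple feedback loop: a teacher proposes a problem, a student attempts it, and the teacher refines the problem in the next turn based on the outcome, with every variant kept as a curriculum stepping stone. The sketch below is a minimal illustration under assumed stub functions (`student_attempt`, `teacher_refine` are hypothetical stand-ins; in the paper these are LLM calls, and the difficulty signal is an in-context performance summary rather than a single boolean):

```python
# Minimal sketch of the multi-turn synthetic-data generation loop.
# All function names here are hypothetical stand-ins for LLM calls.

def student_attempt(problem: str) -> bool:
    """Stub: pretend the student solves problems with difficulty <= 2."""
    return int(problem.split("difficulty=")[1]) <= 2

def teacher_refine(problem: str, solved: bool) -> str:
    """Stub: make the problem harder if solved, easier otherwise."""
    d = int(problem.split("difficulty=")[1])
    d = d + 1 if solved else max(1, d - 1)
    return f"task difficulty={d}"

def generate_curriculum(seed: str, turns: int = 4) -> list[str]:
    """Iteratively refine a seed problem, keeping every variant as a
    stepping stone for curriculum-based RL training."""
    variants, problem = [seed], seed
    for _ in range(turns):
        solved = student_attempt(problem)          # in-context performance signal
        problem = teacher_refine(problem, solved)  # teacher adapts difficulty
        variants.append(problem)
    return variants

curriculum = generate_curriculum("task difficulty=1")
print(curriculum)
```

The key property this loop illustrates is that difficulty adapts to the student: the teacher needs no fine-tuning, only the student's recent outcomes, and the accumulated variants form a natural easy-to-hard progression for curriculum scheduling.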
Source: arXiv: 2603.24202