📄 Paper Summary
RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments
1️⃣ One-Sentence Summary
This paper proposes a new method called RLVE, which trains language models with a large collection of verifiable environments that automatically adapt problem difficulty. It yields significant gains across a range of reasoning benchmarks while being more compute-efficient than conventional reinforcement learning training.
We introduce Reinforcement Learning (RL) with Adaptive Verifiable Environments (RLVE), an approach using verifiable environments that procedurally generate problems and provide algorithmically verifiable rewards, to scale up RL for language models (LMs). RLVE enables each verifiable environment to dynamically adapt its problem difficulty distribution to the policy model's capabilities as training progresses. In contrast, static data distributions often lead to vanishing learning signals when problems are either too easy or too hard for the policy. To implement RLVE, we create RLVE-Gym, a large-scale suite of 400 verifiable environments carefully developed through manual environment engineering. Using RLVE-Gym, we show that environment scaling, i.e., expanding the collection of training environments, consistently improves generalizable reasoning capabilities. RLVE with joint training across all 400 environments in RLVE-Gym yields a 3.37% absolute average improvement across six reasoning benchmarks, starting from one of the strongest 1.5B reasoning LMs. By comparison, continuing this LM's original RL training yields only a 0.49% average absolute gain despite using over 3x more compute. We release our code publicly.
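The abstract describes environments that procedurally generate problems, verify answers algorithmically, and adapt their difficulty distribution to the policy's capability. The paper's actual environment interface is not shown in this summary; below is a minimal illustrative sketch in Python, with a hypothetical `AdditionEnv` class and a simple success-rate heuristic for difficulty adaptation that are assumptions for illustration, not the RLVE-Gym implementation.

```python
import random


class AdditionEnv:
    """Toy verifiable environment (hypothetical, not from RLVE-Gym):
    procedurally generates arithmetic problems, verifies answers exactly,
    and adapts difficulty to the policy's recent success rate."""

    def __init__(self, difficulty: int = 1):
        self.difficulty = difficulty          # number of digits per operand
        self.recent_rewards: list[float] = []

    def generate(self) -> tuple[str, int]:
        """Procedurally generate a problem together with its ground-truth answer."""
        hi = 10 ** self.difficulty
        a, b = random.randrange(hi), random.randrange(hi)
        return f"Compute {a} + {b}.", a + b

    def verify(self, answer: str, truth: int) -> float:
        """Algorithmically verifiable reward: 1.0 if correct, else 0.0."""
        try:
            return float(int(answer.strip()) == truth)
        except ValueError:
            return 0.0

    def adapt(self, reward: float, window: int = 64):
        """Shift the difficulty toward the policy's capability: harder when the
        policy solves most recent problems, easier when it solves few."""
        self.recent_rewards.append(reward)
        if len(self.recent_rewards) >= window:
            rate = sum(self.recent_rewards) / len(self.recent_rewards)
            if rate > 0.8:
                self.difficulty += 1
            elif rate < 0.2 and self.difficulty > 1:
                self.difficulty -= 1
            self.recent_rewards.clear()


# Example usage: in real training, the answer would come from the policy LM.
env = AdditionEnv()
problem, truth = env.generate()
reward = env.verify("42", truth)
env.adapt(reward)
```

The point of the sketch is the training loop's contract: because the environment both generates and grades problems, rewards need no human labels, and the `adapt` step keeps the problem distribution near the edge of the policy's ability so the learning signal does not vanish on problems that are too easy or too hard.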