Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models
1️⃣ One-sentence summary
This paper proposes a new method called Composition-RL, which automatically composes multiple simple problems into new, more complex training prompts, thereby making more effective use of limited verifiable data to improve the reasoning capability of large language models.
Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, thereby reducing the effective data size. To mitigate this, we propose Composition-RL, a simple yet effective approach that better utilizes limited verifiable prompts by targeting pass-rate-1 prompts. More specifically, Composition-RL automatically composes multiple problems into a new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be further boosted with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Code, datasets, and models are available at this https URL.
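The abstract does not spell out how a compositional prompt is assembled, so the following is only a minimal sketch of one plausible scheme, assuming each training item carries a question, a verifiable ground-truth answer, and a measured rollout pass rate. The `VerifiablePrompt`, `compose_prompts`, and `curriculum_depth` names, the answer-concatenation verifier target, and the linear depth schedule are illustrative assumptions, not the paper's released implementation.

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class VerifiablePrompt:
    question: str
    answer: str       # ground-truth answer checked by a rule-based verifier
    pass_rate: float  # fraction of rollouts verified correct for this prompt

def compose_prompts(pool: List[VerifiablePrompt],
                    depth: int = 2,
                    easy_threshold: float = 1.0) -> VerifiablePrompt:
    """Hypothetical composition: chain `depth` easy (pass-rate-1) prompts
    into one multi-part question. The composed prompt stays verifiable
    because its target answer is just the tuple of sub-answers."""
    easy = [p for p in pool if p.pass_rate >= easy_threshold]
    depth = min(depth, len(easy))  # guard against a small easy-prompt pool
    parts = random.sample(easy, k=depth)
    question = "Solve all sub-problems and report every answer.\n" + "\n".join(
        f"Part {i + 1}: {p.question}" for i, p in enumerate(parts)
    )
    answer = "; ".join(p.answer for p in parts)
    # pass_rate is unknown until new rollouts are collected; 0.0 is a placeholder.
    return VerifiablePrompt(question=question, answer=answer, pass_rate=0.0)

def curriculum_depth(step: int, total_steps: int, max_depth: int = 4) -> int:
    """Curriculum variant (assumed linear): grow compositional depth from 1
    to `max_depth` over the course of training."""
    return 1 + (max_depth - 1) * step // max(total_steps - 1, 1)
```

In a curriculum run, `curriculum_depth(step, total_steps)` would replace the fixed `depth`, so early training sees single problems and later training sees deeper compositions; cross-domain composition would amount to sampling the `depth` sub-problems from pools belonging to different domains.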
Source: arXiv:2602.12036