arXiv submission date: 2026-02-24
📄 Abstract - Actor-Curator: Co-adaptive Curriculum Learning via Policy-Improvement Bandits for RL Post-Training

Post-training large foundation models with reinforcement learning typically relies on massive and heterogeneous datasets, making effective curriculum learning both critical and challenging. In this work, we propose ACTOR-CURATOR, a scalable and fully automated curriculum learning framework for reinforcement learning post-training of large language models (LLMs). ACTOR-CURATOR learns a neural curator that dynamically selects training problems from large problem banks by directly optimizing for expected policy performance improvement. We formulate problem selection as a non-stationary stochastic bandit problem, derive a principled loss function based on online stochastic mirror descent, and establish regret guarantees under partial feedback. Empirically, ACTOR-CURATOR consistently outperforms uniform sampling and strong curriculum baselines across a wide range of challenging reasoning benchmarks, demonstrating improved training stability and efficiency. Notably, it achieves relative gains of 28.6% on AIME2024 and 30.5% on ARC-1D over the strongest baseline and up to 80% speedup. These results suggest that ACTOR-CURATOR is a powerful and practical approach for scalable LLM post-training.
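The abstract frames problem selection as a non-stationary stochastic bandit whose reward is the measured policy improvement, updated via online stochastic mirror descent. A minimal sketch of that idea, assuming an EXP3-style curator with an entropic regularizer and importance-weighted reward updates (the class name `Curator`, the `explore` mixing parameter, and the `improvement` reward signal are illustrative assumptions, not the paper's exact algorithm):

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

class Curator:
    """Hypothetical sketch of a bandit curator over a problem bank.

    Mirror descent with an entropic regularizer reduces to
    multiplicative-weights (EXP3-style) updates; the reward for a
    selected problem is the observed policy improvement after
    training on it.
    """
    def __init__(self, n_problems, lr=0.1, explore=0.05):
        self.logits = [0.0] * n_problems
        self.lr = lr
        self.explore = explore  # uniform mixing to track non-stationarity

    def probs(self):
        """Sampling distribution: softmax mixed with uniform exploration."""
        p = softmax(self.logits)
        n = len(p)
        return [(1 - self.explore) * pi + self.explore / n for pi in p]

    def select(self):
        """Sample one problem index from the current distribution."""
        p = self.probs()
        r, acc = random.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if r <= acc:
                return i
        return len(p) - 1

    def update(self, arm, improvement):
        """Importance-weighted reward update for the chosen problem.

        `improvement` is the measured policy gain (partial feedback:
        only the selected arm's reward is observed).
        """
        p = self.probs()
        self.logits[arm] += self.lr * improvement / p[arm]
```

In practice the curator in the paper is a learned neural network over problem features rather than a per-arm weight table, but the update structure above illustrates the bandit formulation the abstract describes.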

Top tags: llm, reinforcement learning, model training
Detailed tags: curriculum learning, policy improvement, bandit algorithms, post-training, automated data selection

Actor-Curator: Co-adaptive Curriculum Learning via Policy-Improvement Bandits for RL Post-Training


1️⃣ One-sentence summary

This paper proposes Actor-Curator, an automated curriculum learning framework in which a learned "curator" dynamically selects training problems to optimize the RL post-training of large language models, significantly improving both performance on complex reasoning tasks and training efficiency.

Source: arXiv:2602.20532