arXiv submission date: 2026-02-16
📄 Abstract - On the Learning Dynamics of RLVR at the Edge of Competence

Reinforcement learning with verifiable rewards (RLVR) has been a main driver of recent breakthroughs in large reasoning models. Yet it remains a mystery how rewards based solely on final outcomes can help overcome the long-horizon barrier to extended reasoning. To understand this, we develop a theory of the training dynamics of RL for transformers on compositional reasoning tasks. Our theory characterizes how the effectiveness of RLVR is governed by the smoothness of the difficulty spectrum. When data contains abrupt discontinuities in difficulty, learning undergoes grokking-type phase transitions, producing prolonged plateaus before progress recurs. In contrast, a smooth difficulty spectrum leads to a relay effect: persistent gradient signals on easier problems elevate the model's capabilities to the point where harder ones become tractable, resulting in steady and continuous improvement. Our theory explains how RLVR can improve performance at the edge of competence, and suggests that appropriately designed data mixtures can yield scalable gains. As a technical contribution, our analysis develops and adapts tools from Fourier analysis on finite groups to our setting. We validate the predicted mechanisms empirically via synthetic experiments.
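The relay-versus-plateau mechanism described in the abstract can be illustrated with a toy simulation. This is a minimal sketch under assumptions of our own (a scalar "capability", a logistic success probability, and a learning signal that peaks when success probability is near 0.5), not the paper's actual model:

```python
import math

def train(difficulties, steps=400, lr=0.5, k=4.0):
    """Toy RLVR-style dynamics (illustrative, not the paper's model):
    a scalar capability c grows with the aggregate learning signal,
    which peaks on problems whose success probability is near 0.5,
    i.e. at the edge of competence."""
    c, history = 0.0, []
    for _ in range(steps):
        signal = 0.0
        for d in difficulties:
            p = 1.0 / (1.0 + math.exp(-k * (c - d)))  # success probability
            signal += p * (1.0 - p)                    # gradient magnitude
        c += lr * signal / len(difficulties)
        history.append(c)
    return history

# Smooth difficulty spectrum: evenly spaced from easy to hard.
smooth = train([i * 0.5 for i in range(21)])
# Gapped spectrum: a band of easy problems, then an abrupt jump.
gapped = train([0.0, 0.5, 1.0, 9.0, 9.5, 10.0])
```

Under the smooth spectrum some problem always sits near the current edge of competence, so capability climbs steadily (the relay effect); under the gapped spectrum, progress stalls once the easy band is mastered, because the hard problems return essentially zero signal until capability approaches their difficulty.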

Top-level tags: reinforcement learning, theory, model training
Detailed tags: rlvr, learning dynamics, transformers, compositional reasoning, fourier analysis

On the Learning Dynamics of RLVR at the Edge of Competence


1️⃣ One-sentence summary

Through theoretical analysis and experimental validation, this paper reveals how reinforcement learning with verifiable rewards helps models solve complex reasoning tasks. The key lies in the smoothness of the task-difficulty spectrum of the training data: a smooth difficulty spectrum produces a "relay effect" that yields steady improvement, whereas abrupt jumps in difficulty lead to learning plateaus and sudden "grokking"-style phase transitions.

Source: arXiv: 2602.14872