ADORA: Training Reasoning Models with Dynamic Advantage Estimation in Reinforcement Learning
1️⃣ One-Sentence Summary
This paper proposes a new method called ADORA, which improves policy optimization in reinforcement learning by dynamically assessing the value of training samples, enabling reasoning models to learn faster and more stably on complex tasks such as mathematics and geometry.
Reinforcement learning has become a cornerstone technique for developing reasoning models on complex tasks, ranging from mathematical problem-solving to geometric reasoning. The optimization of these models typically relies on policy gradient methods, whose efficacy hinges on accurate estimation of an advantage function. However, prevailing methods employ static advantage estimation, a practice that leads to inefficient credit assignment by neglecting the dynamic utility of training samples over time. This limitation results in suboptimal policy updates, which manifest as slower convergence and increased learning instability, as models fail to adapt effectively to evolving sample utilities. To address this problem, we introduce \textbf{ADORA} (\textbf{A}dvantage \textbf{D}ynamics via \textbf{O}nline \textbf{R}ollout \textbf{A}daptation), a novel framework for policy optimization. ADORA dynamically adjusts the weighting of the advantage function by adaptively categorizing training data into temporarily advantageous and disadvantageous samples, based on their evolving utility during online model rollouts. This data differentiation strategy allows ADORA to be integrated seamlessly into existing policy optimization algorithms without significant architectural modifications, enabling the policy to prioritize learning from more informative experiences and thereby achieve more efficient policy updates. Extensive evaluations across diverse model families and data scales demonstrate that ADORA is a robust and efficient framework: it significantly enhances long-chain reasoning on both geometric and mathematical tasks, consistently achieving notable performance gains without sensitive hyperparameter tuning.
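To make the abstract's core idea concrete, here is a minimal sketch of what "dynamic advantage weighting via online rollout adaptation" could look like in code. The abstract gives no pseudocode, so everything below is an assumption rather than the paper's actual algorithm: the function name `adora_advantages`, the exponential-moving-average utility tracker, and the `decay`/`boost`/`damp` parameters are all hypothetical, and the group-mean baseline follows common GRPO-style practice, not anything confirmed by the source.

```python
import numpy as np

def adora_advantages(rewards, prev_utility, decay=0.9, boost=1.0, damp=0.1):
    """Hypothetical sketch of ADORA-style dynamic advantage weighting.

    rewards:      per-rollout scalar rewards for one prompt's group of rollouts
    prev_utility: running utility estimate for this prompt from earlier steps
    Returns the reweighted advantages and the updated utility estimate.
    """
    rewards = np.asarray(rewards, dtype=np.float64)

    # Group-relative advantage (GRPO-style assumption): subtract the group
    # mean and normalize by the group standard deviation when it is nonzero.
    adv = rewards - rewards.mean()
    if rewards.std() > 1e-8:
        adv = adv / rewards.std()

    # Track how the sample's utility evolves across online rollouts; here
    # utility is an exponential moving average of the mean reward (assumed).
    utility = decay * prev_utility + (1.0 - decay) * rewards.mean()

    # Categorize: a sample whose utility is improving is treated as
    # "temporarily advantageous" and kept at full weight; a stagnating or
    # regressing sample is "temporarily disadvantageous" and down-weighted,
    # so the policy prioritizes more informative experiences.
    weight = boost if utility > prev_utility else damp
    return weight * adv, utility

# Usage: rollout rewards for one prompt, plus its utility from earlier steps.
adv, utility = adora_advantages([1.0, 0.0, 1.0, 0.0], prev_utility=0.3)
```

Because the reweighting touches only the advantage values, a scheme like this could slot into an existing policy-gradient loop without architectural changes, which is consistent with the abstract's claim of seamless integration.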
Source: arXiv:2602.10019