arXiv submission date: 2026-03-01
📄 Abstract - Learn Hard Problems During RL with Reference Guided Fine-tuning

Reinforcement learning (RL) for mathematical reasoning can suffer from reward sparsity: on challenging problems, the LLM fails to sample any correct trajectories, preventing RL from receiving meaningful positive feedback. At the same time, human-written reference solutions often exist alongside such problems (e.g., problems from AoPS), but directly fine-tuning on these solutions offers no benefit, because models often cannot imitate human proofs that lie outside their own reasoning distribution. We introduce Reference-Guided Fine-Tuning (ReGFT), a simple and effective method that uses human-written reference solutions to synthesize positive trajectories on hard problems and trains on them before RL. For each problem, we provide the model with a partial reference solution and let it generate its own reasoning trace, ensuring the resulting trajectories remain in the model's reasoning space while still benefiting from reference guidance. Fine-tuning on these reference-guided trajectories increases the number of solvable problems and produces a checkpoint that receives more positive rewards during RL. Across three benchmarks (AIME24, AIME25, BeyondAIME), ReGFT consistently improves supervised accuracy, accelerates DAPO training, and raises the final performance plateau of RL. Our results show that ReGFT effectively overcomes reward sparsity and unlocks stronger RL-based mathematical reasoning.
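The data-synthesis loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prefixing scheme (`make_prefix`), the sampling and verification callables (`model_sample`, `verify`), and the choice to drop the hint from the stored prompt are all assumptions; the abstract only states that a partial reference solution is shown to the model and self-generated correct traces are kept for fine-tuning.

```python
def make_prefix(reference_solution: str, fraction: float) -> str:
    """Take the first `fraction` of a human-written reference solution
    as a hint. (Hypothetical: the exact partial-solution scheme is not
    specified in the abstract.)"""
    words = reference_solution.split()
    cut = max(1, int(len(words) * fraction))
    return " ".join(words[:cut])

def collect_regft_data(problems, model_sample, verify, fraction=0.5, attempts=8):
    """For each hard problem, prompt the model with the problem plus a
    partial reference solution, keep the first self-generated trace that
    reaches the correct answer, and store (prompt, completion) pairs for
    supervised fine-tuning before RL. `model_sample` and `verify` are
    hypothetical callables standing in for LLM sampling and an answer
    checker."""
    sft_pairs = []
    for prob in problems:
        hint = make_prefix(prob["reference"], fraction)
        prompt = (f"{prob['question']}\n\nPartial solution hint:\n{hint}"
                  f"\n\nContinue the reasoning:")
        for _ in range(attempts):
            trace = model_sample(prompt)
            if verify(trace, prob["answer"]):
                # Store the model's own trace against the unhinted question,
                # so training data stays in the model's reasoning space.
                sft_pairs.append({"prompt": prob["question"],
                                  "completion": trace})
                break
    return sft_pairs
```

Because the stored completions are the model's own traces rather than the human proofs, fine-tuning on them avoids the distribution mismatch that makes direct imitation of reference solutions ineffective.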

Top-level tags: reinforcement learning, llm, model training
Detailed tags: mathematical reasoning, reward sparsity, fine-tuning, reference-guided learning, rl training

Learn Hard Problems During RL with Reference Guided Fine-tuning


1️⃣ One-Sentence Summary

This paper proposes a method called ReGFT, which uses human-written reference solutions to guide a large language model into generating its own reasoning traces. This addresses the reward-sparsity problem that hampers reinforcement learning for mathematical reasoning and ultimately yields a significant improvement in the model's performance on hard math problems.

Source: arXiv: 2603.01223