PRL: Process Reward Learning Improves LLMs' Reasoning Ability and Broadens the Reasoning Boundary
1️⃣ One-Sentence Summary
This paper proposes a new method called Process Reward Learning (PRL), which trains large language models by decomposing the final outcome reward into fine-grained supervision signals over the reasoning process. The result is a theoretically more rigorous and more training-efficient way to improve both the models' reasoning ability and their ceiling on solving hard problems.
Improving the reasoning abilities of Large Language Models (LLMs) has received sustained attention recently. However, most existing work relies on outcome rewards at the trajectory level, providing no fine-grained supervision over the reasoning process. Training frameworks that do incorporate process signals typically depend on costly additional machinery, such as MCTS or a separately trained reward model, which hurts training efficiency. Moreover, the intuition behind their process-signal design often lacks rigorous theoretical support, leaving the optimization mechanism opaque. In this paper, we propose Process Reward Learning (PRL), which decomposes the entropy-regularized reinforcement learning objective into intermediate steps, yielding principled process rewards that can be assigned to the model at each step. Starting from this theoretical motivation, we derive a formulation of PRL that is essentially equivalent to maximizing reward plus a KL-divergence penalty between the policy and a reference model. Unlike that outcome-level objective, however, PRL converts the outcome reward into process supervision signals that better guide exploration during RL optimization. Our experiments demonstrate that PRL not only improves the average reasoning performance of LLMs as measured by average@n, but also broadens the reasoning boundary by improving pass@n. Extensive experiments verify the effectiveness of PRL and show that it generalizes.
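As a minimal sketch of the kind of decomposition the abstract describes (using standard notation, not the paper's exact formulation), the KL-penalized objective for a response $y=(y_1,\dots,y_T)$ to a prompt $x$ is

$$
J(\pi)\;=\;\mathbb{E}_{y\sim\pi(\cdot\mid x)}\big[r(x,y)\big]\;-\;\beta\,\mathrm{KL}\big(\pi(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big),
$$

and by the chain rule of KL divergence the sequence-level penalty factorizes over steps,

$$
\mathrm{KL}\big(\pi(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big)\;=\;\sum_{t=1}^{T}\,\mathbb{E}_{y_{<t}\sim\pi}\Big[\mathrm{KL}\big(\pi(\cdot\mid x,y_{<t})\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x,y_{<t})\big)\Big],
$$

so the trajectory-level objective admits a per-step term $-\beta\big(\log\pi(y_t\mid x,y_{<t})-\log\pi_{\mathrm{ref}}(y_t\mid x,y_{<t})\big)$ that can be credited to each intermediate step. This illustrates the general sense in which an outcome-level objective can be turned into process-level reward signals; the precise process rewards used by PRL are given in the paper.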
Source: arXiv: 2601.10201