Enhanced LLM Reasoning by Optimizing Reward Functions with Search-Driven Reinforcement Learning
1️⃣ One-sentence summary
This paper proposes a method for automatically searching and optimizing reward functions: a language model generates candidate rewards, which are screened with short training runs and refined through iterative ranked feedback. The approach significantly improves a large language model's performance on mathematical reasoning tasks, with the best ensemble achieving a 0.19 absolute F1 gain over the baseline.
Mathematical reasoning is a key benchmark for large language models. Reinforcement learning is a standard post-training mechanism for improving the reasoning capabilities of large language models, yet performance remains sensitive to the design of the reward function that drives policy optimization. This paper introduces a search-driven framework that treats the reward specification itself as an object of optimization. The setting of interest is one in which the base model is held fixed and the reward specification is the primary remaining design lever. Candidate reward functions are generated by a frontier language model, validated automatically, screened through 500-step Group Relative Policy Optimization (GRPO) training runs on a Llama-3.2-3B-Instruct base model with Low-Rank Adaptation (LoRA), and ranked by F1 on the GSM8K test set. Ranked summaries from prior rounds are then fed back into the next round of generation. Over five rounds, the search produces 50 candidate rewards. The mean F1 rises from 0.596 in Round 1 to 0.632 in Round 5, and the top individual reward reaches F1 = 0.787. Seven ensemble configurations of top-ranked rewards are evaluated. The best ensemble achieves F1 = 0.795 (95% bootstrap CI [0.756, 0.832]) and accuracy 0.660 [0.635, 0.686], a 0.19 absolute F1 gain over a base-rewards-only GRPO baseline (F1 = 0.609). Pairwise McNemar tests with Bonferroni correction show that all configurations with five or more rewards are statistically indistinguishable at α = 0.05/21. A three-seed re-training of the best ensemble yields F1 = 0.785. A randomly drawn 5-reward control collapses to F1 = 0.047, which shows that the ranked-feedback loop, not merely the additive effect of combining more rewards, drives the gain.
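The search loop described above (propose candidates, validate, screen with short GRPO runs, rank by F1, feed ranked summaries back) can be sketched as follows. This is a toy illustration only: `propose_rewards`, `validates`, and `screen_f1` are hypothetical stand-ins for the paper's LLM generation, automatic validation, and 500-step GRPO screening, which are not specified in code here.

```python
import random

def propose_rewards(feedback, n):
    # Stand-in for LLM generation; the real system prompts a frontier
    # model conditioned on ranked summaries of earlier rounds.
    return [f"reward_r{len(feedback)}_{i}" for i in range(n)]

def validates(reward_fn):
    # Stand-in for the automatic validation step.
    return True

def screen_f1(reward_fn):
    # Stand-in for a 500-step GRPO run plus GSM8K F1 evaluation.
    return random.random()

def search_reward_functions(n_rounds=5, per_round=10, seed=0):
    """Iterative reward search: each round sees a ranked summary of all
    candidates screened so far, mirroring the paper's feedback loop."""
    random.seed(seed)
    history, scored = [], []
    for _ in range(n_rounds):
        candidates = propose_rewards(history, per_round)
        scored += [(screen_f1(r), r) for r in candidates if validates(r)]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        history.append([name for _, name in scored[:5]])  # ranked feedback
    return scored

top = search_reward_functions()
print(len(top))  # 50 candidates over five rounds, as in the paper
```

With 5 rounds of 10 candidates this reproduces the paper's budget of 50 screened rewards; the key design choice is that screening is cheap (a short LoRA run) so many candidates can be triaged before any full training.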
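The two statistics reported in the abstract, a percentile bootstrap confidence interval and pairwise McNemar tests with a Bonferroni threshold of 0.05/21 (21 pairs from 7 configurations), can be sketched with the standard formulas. This is a minimal stdlib-only version, not the paper's evaluation code.

```python
import math
import random

def bootstrap_ci(flags, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a per-example 0/1 metric (e.g. accuracy)."""
    rng = random.Random(seed)
    n = len(flags)
    stats = sorted(sum(rng.choices(flags, k=n)) / n for _ in range(n_boot))
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on the discordant pairs:
    b = examples only system A gets right, c = only system B."""
    n, k = b + c, min(b, c)
    # Under H0 the discordant outcomes are Binomial(n, 0.5).
    p_one_sided = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_one_sided)
```

For example, `mcnemar_exact(0, 10)` falls below the Bonferroni threshold 0.05/21 ≈ 0.0024, while a balanced split like `mcnemar_exact(5, 5)` yields p = 1.0, the "statistically indistinguishable" case the paper reports for its five-or-more-reward ensembles.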
Source: arXiv: 2605.02073