arXiv submission date: 2025-12-02
📄 Abstract - SPARK: Stepwise Process-Aware Rewards for Reference-Free Reinforcement Learning

Process reward models (PRMs) that provide dense, step-level feedback have shown promise for reinforcement learning, yet their adoption remains limited by the need for expensive step-level annotations or ground truth references. We propose SPARK: a three-stage framework where in the first stage a generator model produces diverse solutions and a verifier model evaluates them using parallel scaling (self-consistency) and sequential scaling (meta-critique). In the second stage, we use these verification outputs as synthetic training data to fine-tune generative process reward models, which subsequently serve as reward signals during training. We show that aggregating multiple independent verifications at the step level produces training data for process reward models that surpass ground-truth outcome supervision, achieving 67.5 F1 on ProcessBench (a benchmark for identifying erroneous steps in mathematical reasoning) compared to 66.4 for reference-guided training and 61.9 for GPT-4o. In the final stage, we apply our generative PRM with chain-of-thought verification (PRM-CoT) as the reward model in RL experiments on mathematical reasoning, and introduce format constraints to prevent reward hacking. Using Qwen2.5-Math-7B, we achieve 47.4% average accuracy across six mathematical reasoning benchmarks, outperforming ground-truth-based RLVR (43.9%). Our work enables reference-free RL training that exceeds ground-truth methods, opening new possibilities for domains lacking verifiable answers or accessible ground truth.
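To make the step-level aggregation idea concrete, below is a minimal sketch (not the authors' code; the function and variable names are illustrative assumptions) of how several independent verifier passes over the same solution could be majority-voted into synthetic per-step labels for training a process reward model:

```python
# Illustrative sketch of step-level self-consistency aggregation:
# each independent verifier pass labels every step of one solution as
# correct or not, and a majority vote over those passes yields the
# synthetic step-level supervision used to fine-tune the PRM.
from collections import Counter

def aggregate_step_labels(verifications: list[list[bool]]) -> list[bool]:
    """Majority-vote each step's label across independent verifier passes.

    verifications: one inner list per pass; each inner list holds a
    per-step correctness judgment for the same solution.
    """
    num_steps = len(verifications[0])
    assert all(len(v) == num_steps for v in verifications), "step counts must match"
    labels = []
    for step_idx in range(num_steps):
        votes = Counter(v[step_idx] for v in verifications)
        labels.append(votes[True] >= votes[False])  # ties default to "correct"
    return labels

# Example: three verifier passes over a four-step solution.
passes = [
    [True, True, False, False],
    [True, False, False, False],
    [True, True, True, False],
]
print(aggregate_step_labels(passes))  # [True, True, False, False]
```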

Top tags: reinforcement learning, llm, model training
Detailed tags: process reward models, mathematical reasoning, reward hacking, synthetic training data, self-consistency

SPARK: Stepwise Process-Aware Rewards for Reference-Free Reinforcement Learning


1️⃣ One-sentence summary

This paper proposes SPARK, a three-stage framework that, without requiring ground-truth answers or detailed human annotations, uses model self-verification to generate high-quality step-by-step reward feedback, letting an AI trained with reinforcement learning outperform traditional methods that depend on ground-truth answers on tasks such as mathematical reasoning.
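The abstract also mentions format constraints as a guard against reward hacking in the RL stage. One plausible reading, sketched below under the assumption of a simple "boxed answer" requirement (the exact constraints are not spelled out here, and `prm_score` stands in for the generative PRM-CoT verifier), is to gate the PRM reward behind a format check:

```python
# Hedged sketch: gate the PRM reward behind a format check so the policy
# cannot collect reward with degenerate outputs the PRM scores highly.
# The \boxed{...} requirement and the penalty value are assumptions for
# illustration, not the paper's specification.
import re

BOXED_ANSWER = re.compile(r"\\boxed\{.+\}")  # final answer must appear in \boxed{...}

def shaped_reward(response: str, prm_score: float) -> float:
    """Return the PRM score only for well-formatted responses."""
    if BOXED_ANSWER.search(response) is None:
        return -1.0  # format violation: flat penalty regardless of PRM score
    return prm_score
```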


Source: arXiv:2512.03244