JudgeRLVR: Judge First, Generate Second for Efficient Reasoning
1️⃣ One-Sentence Summary
This paper proposes a new method called JudgeRLVR, which first teaches a large language model to judge whether an answer is correct and then has it generate answers building on that judging ability, yielding solutions on mathematical reasoning tasks that are both more accurate and more concise.
Reinforcement Learning with Verifiable Rewards (RLVR) has become a standard paradigm for reasoning in Large Language Models. However, optimizing solely for final-answer correctness often drives models into aimless, verbose exploration, where they rely on exhaustive trial-and-error tactics rather than structured planning to reach solutions. While heuristic constraints such as length penalties can reduce verbosity, they often truncate essential reasoning steps, creating a difficult trade-off between efficiency and verification. In this paper, we argue that discriminative capability is a prerequisite for efficient generation: by learning to distinguish valid solutions, a model can internalize a guidance signal that prunes the search space. We propose JudgeRLVR, a two-stage judge-then-generate paradigm. In the first stage, we train the model to judge solution responses against verifiable answers. In the second stage, we fine-tune the same model with vanilla generative RLVR, initialized from the judge. Compared to vanilla RLVR trained on the same math-domain data, JudgeRLVR achieves a better quality-efficiency trade-off for Qwen3-30B-A3B: on in-domain math benchmarks, it delivers about +3.7 points of average accuracy with a 42% reduction in average generation length; on out-of-domain benchmarks, it delivers about +4.5 points of average accuracy improvement, demonstrating stronger generalization.
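To make the two-stage recipe concrete, below is a minimal sketch (not the authors' code) of the two verifiable reward signals the abstract implies: stage 1 rewards the model for correctly judging a given solution, and stage 2 rewards it for generating a correct final answer, starting from the stage-1 judge checkpoint. The `Sample` fields and the parsing helpers (`extract_verdict`, `extract_final_answer`) are illustrative assumptions; a real setup would plug these rewards into a standard RLVR training loop.

```python
# Hedged sketch of JudgeRLVR's two reward stages; names are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass
class Sample:
    problem: str
    candidate_solution: str      # stage 1: a solution to be judged
    candidate_is_correct: bool   # stage 1: verifiable label for that solution
    gold_answer: str             # stage 2: verifiable final answer


def extract_verdict(model_output: str) -> bool:
    """Parse the model's judgment ('correct' vs. 'incorrect') — placeholder heuristic."""
    text = model_output.lower()
    return "correct" in text and "incorrect" not in text


def extract_final_answer(model_output: str) -> str:
    """Parse the model's final answer (here: last non-empty line) — placeholder heuristic."""
    lines = model_output.strip().splitlines()
    return lines[-1].strip() if lines else ""


def stage1_judge_reward(sample: Sample, model_output: str) -> float:
    # Stage 1: the model judges a provided solution; reward is 1 iff its verdict
    # matches the verifiable correctness label of that solution.
    return float(extract_verdict(model_output) == sample.candidate_is_correct)


def stage2_generate_reward(sample: Sample, model_output: str) -> float:
    # Stage 2: vanilla generative RLVR, initialized from the stage-1 judge; reward
    # is 1 iff the generated final answer matches the verifiable gold answer.
    return float(extract_final_answer(model_output) == sample.gold_answer)
```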
Source: arXiv: 2601.08468