Abstract - Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards
Reinforcement Learning from Verifiable Rewards (RLVR) significantly enhances the reasoning ability of large language models (LLMs) but suffers severely from calibration degradation, where models become excessively over-confident in incorrect answers. Previous studies attempt to directly incorporate a calibration objective into the existing optimization target. However, our theoretical analysis demonstrates a fundamental gradient conflict between maximizing policy accuracy and minimizing calibration error. Building on this insight, we propose DCPO, a simple yet effective framework that systematically decouples the reasoning and calibration objectives. Extensive experiments demonstrate that DCPO not only preserves accuracy on par with GRPO but also achieves the best calibration performance and substantially mitigates over-confidence. Our study provides valuable insights and a practical solution for more reliable LLM deployment.
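The gradient conflict the abstract refers to can be illustrated with a toy calculation (this is a sketch for intuition only, not the paper's actual objectives; the log-confidence reward surrogate and the Brier-score calibration loss are assumptions for illustration):

```python
# Toy illustration of the gradient conflict: for a question where the model's
# empirical accuracy is a, compare the gradient of a reward-style objective
# (maximize log-confidence in the sampled answer) with the ascent direction
# of a calibration loss (Brier score) w.r.t. the stated confidence p.

def reward_grad(p: float) -> float:
    # d/dp of log p: always positive, so pure reward maximization
    # keeps pushing confidence toward 1, even on wrong answers.
    return 1.0 / p

def calibration_grad(p: float, a: float) -> float:
    # Brier loss (p - a)^2 has gradient 2(p - a); the descent direction
    # is -2(p - a), which pulls confidence p toward the accuracy a.
    return -2.0 * (p - a)

p, a = 0.9, 0.6  # over-confident: stated confidence exceeds accuracy
g_reward = reward_grad(p)          # positive: raise confidence further
g_calib = calibration_grad(p, a)   # negative: lower confidence toward a

# Whenever p > a, the two updates point in opposite directions.
print(g_reward > 0 and g_calib < 0)
```

Running this prints `True`: the two gradients have opposite signs whenever the model is over-confident, which is the regime RLVR training drives it into. DCPO's decoupling, as described in the abstract, avoids summing these conflicting signals into a single optimization target.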
Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards
1️⃣ One-Sentence Summary
This paper finds that when training large language models with verifiable rewards, the goal of answer accuracy and the goal of the model holding well-calibrated confidence in its own answers (i.e., not being over-confident) are in conflict. It therefore proposes a new method, DCPO, that trains the two objectives separately, effectively resolving over-confidence in incorrect answers while preserving answer accuracy.