arXiv submission date: 2026-04-20
📄 Abstract - Too Correct to Learn: Reinforcement Learning on Saturated Reasoning Data

Reinforcement Learning (RL) enhances LLM reasoning, yet a paradox emerges as models scale: strong base models saturate standard benchmarks (e.g., MATH), yielding correct but homogeneous solutions. In such environments, the lack of failure cases causes the advantage signal in group-relative algorithms (e.g., GRPO) to vanish, driving policies into mode collapse. To address this, we propose Constrained Uniform Top-K Sampling (CUTS), a parameter-free decoding strategy enforcing structure-preserving exploration. Unlike standard sampling that follows model biases, CUTS flattens the local optimization landscape by sampling uniformly from constrained high-confidence candidates. We integrate this into Mixed-CUTS, a training framework synergizing exploitative and exploratory rollouts to amplify intra-group advantage variance. Experiments on Qwen3 models demonstrate that our approach prevents policy degeneration and significantly boosts out-of-domain generalization. Notably, Mixed-CUTS improves Pass@1 accuracy on the challenging AIME25 benchmark by up to 15.1% over standard GRPO, validating that maintaining diversity within the semantic manifold is critical for rigorous reasoning.

Top-level tags: llm, reinforcement learning
Detailed tags: reasoning, mode collapse, exploration, decoding strategy, generalization

Too Correct to Learn: Reinforcement Learning on Saturated Reasoning Data


1️⃣ One-sentence summary

This paper finds that when large language models do reinforcement learning on datasets they already score highly on, the lack of failure cases deprives a widely used algorithm (GRPO) of its learning signal and drives the model's outputs toward uniformity. To address this, the authors propose a sampling strategy called CUTS, which, without changing any model parameters, forces uniform selection among high-confidence but diverse candidates; combined with mixed rollout types during training, it lifts accuracy on harder, unseen problems by more than 15%.
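The core decoding idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `cuts_sample` and the knobs `k` and `p_min` are assumptions for demonstration (the paper describes CUTS as parameter-free, so its actual constraint on the candidate set may differ). The key contrast with standard sampling is the final uniform draw.

```python
import numpy as np

def cuts_sample(logits, k=8, p_min=0.05, rng=None):
    """Illustrative sketch of Constrained Uniform Top-K Sampling (CUTS).

    Instead of sampling in proportion to the model's probabilities,
    pick uniformly among the top-k tokens whose probability clears a
    confidence floor, flattening the local preference landscape while
    staying on high-confidence (structure-preserving) candidates.
    NOTE: k and p_min are hypothetical knobs added for this sketch.
    """
    rng = rng or np.random.default_rng()
    # Softmax over logits (shifted for numerical stability).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Restrict to the k highest-probability tokens ...
    top_idx = np.argsort(probs)[::-1][:k]
    # ... then keep only those above the confidence floor.
    candidates = [int(i) for i in top_idx if probs[i] >= p_min]
    if not candidates:  # degenerate case: always keep the argmax
        candidates = [int(top_idx[0])]
    # Uniform choice: every surviving candidate is equally likely,
    # regardless of how the model ranks them internally.
    return int(rng.choice(candidates))
```

In a training loop, rollouts decoded this way would be mixed with standard exploitative rollouts (the paper's Mixed-CUTS), so that within a GRPO group the rewards, and hence the group-relative advantages, regain variance.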

Source: arXiv:2604.18493