arXiv submission date: 2026-01-13
📄 Abstract - Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs

Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from regularizing local token behavior rather than diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions that exhibit rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@$k$ across large sampling budgets and increases the area under the pass@$k$ curve (AUC@$K$) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
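The core reweighting step from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LLM-based judge that clusters rollouts by high-level strategy is assumed as an external input (here, precomputed cluster labels), and the exact weighting and advantage normalization used in the paper may differ. The sketch scales each rollout's reward by the inverse of its strategy-cluster size, then computes a group-relative advantage.

```python
from collections import Counter

def uniqueness_weighted_advantages(rewards, clusters):
    """Reweight per-rollout advantages inversely with strategy-cluster size.

    rewards:  scalar rewards, one per rollout (e.g. 1.0 if correct, 0.0 otherwise)
    clusters: cluster ids, one per rollout, as assigned by an external judge
              that groups rollouts for the same problem by high-level
              solution strategy (the judge itself is not modeled here)
    """
    sizes = Counter(clusters)
    # A correct solution that shares its strategy with many other rollouts
    # earns less than a correct solution using a rare strategy.
    weighted = [r / sizes[c] for r, c in zip(rewards, clusters)]
    # Group-relative advantage: subtract the mean weighted reward.
    baseline = sum(weighted) / len(weighted)
    return [w - baseline for w in weighted]
```

For example, with four rollouts where the first three are correct but the first two use the same strategy (`"A"`), the lone correct rollout using strategy `"B"` receives the highest advantage, while the redundant `"A"` rollouts are down-weighted.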

Top-level tags: llm, reinforcement learning, model training
Detailed tags: exploration collapse, diversity reward, reasoning diversity, rollout clustering, pass@k optimization

Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs


1️⃣ One-sentence summary

This paper proposes a method called Uniqueness-Aware Reinforcement Learning, which rewards correct answers that use rare high-level solution strategies. It addresses the exploration collapse that arises during RL training of large language models, where answer patterns become homogeneous, and markedly improves the model's ability to generate diverse correct answers on complex reasoning tasks without sacrificing single-attempt (pass@1) accuracy.

Source: arXiv:2601.08763