Thickening-to-Thinning: Reward Shaping via Human-Inspired Learning Dynamics for LLM Reasoning
1️⃣ One-Sentence Summary
This paper proposes a dynamic reward framework called T2T that mimics the human learning process: when the model's reasoning is incorrect, it encourages exploring longer solution paths to broaden the search; when the reasoning is correct, it rewards concise expression to improve efficiency. This significantly improves large language models' performance on mathematical reasoning tasks.
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a promising paradigm for enhancing reasoning in Large Language Models (LLMs). However, it frequently encounters challenges such as entropy collapse, excessive verbosity, and insufficient exploration on hard problems. Crucially, existing reward schemes fail to distinguish between the need for extensive search during problem-solving and the efficiency required for mastered knowledge. In this work, we introduce T2T (Thickening-to-Thinning), a dynamic reward framework inspired by human learning processes. Specifically, it implements a dual-phase mechanism: (1) on incorrect attempts, T2T incentivizes "thickening" (longer trajectories) to broaden the search space and explore novel solution paths; (2) upon achieving correctness, it shifts to "thinning", imposing length penalties to discourage redundancy, thereby fostering model confidence and crystallizing reasoning capabilities. Extensive experiments on mathematical benchmarks (MATH-500, AIME, AMC) across Qwen-series and DeepSeek models demonstrate that T2T significantly outperforms standard GRPO and recent baselines.
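The dual-phase mechanism above can be sketched as a simple reward function. This is a minimal illustration, not the paper's actual formulation: the function name, the linear length terms, and the `alpha` coefficient are all assumptions chosen for clarity.

```python
def t2t_reward(is_correct: bool, length: int, max_length: int,
               base_reward: float = 1.0, alpha: float = 0.1) -> float:
    """Hypothetical sketch of a T2T-style dual-phase reward.

    - Incorrect attempt ("thickening"): a small bonus grows with
      trajectory length, nudging the policy toward broader search.
    - Correct attempt ("thinning"): the base reward is reduced by a
      length penalty, nudging the policy toward concise reasoning.
    """
    frac = length / max_length  # normalized trajectory length in [0, 1]
    if is_correct:
        return base_reward - alpha * frac  # penalize redundancy
    return alpha * frac                    # incentivize exploration
```

For example, a correct 200-token answer (with `max_length=1000`) scores near 1.0 with a small deduction, while an incorrect 800-token attempt earns a modest exploration bonus rather than a flat zero.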
Source: arXiv:2602.04265