TRE: Encouraging Exploration in the Trust Region
1️⃣ One-Sentence Summary
This paper proposes a new method called Trust Region Entropy (TRE). By restricting exploration to the region the model considers trustworthy, it addresses the performance degradation that large language models suffer in reinforcement learning from indiscriminate exploration, and achieves better results across multiple tasks including mathematical reasoning.
Entropy regularization is a standard technique in reinforcement learning (RL) to enhance exploration, yet it yields negligible effects or even degrades performance in Large Language Models (LLMs). We attribute this failure to the cumulative tail risk inherent to LLMs with massive vocabularies and long generation horizons. In such environments, standard global entropy maximization indiscriminately dilutes probability mass into the vast tail of invalid tokens rather than focusing on plausible candidates, thereby disrupting coherent reasoning. To address this, we propose Trust Region Entropy (TRE), a method that encourages exploration strictly within the model's trust region. Extensive experiments across mathematical reasoning (MATH), combinatorial search (Countdown), and preference alignment (HH) tasks demonstrate that TRE consistently outperforms vanilla PPO, standard entropy regularization, and other exploration baselines. Our code is available at this https URL.
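The abstract does not spell out how the trust region is defined, but the core contrast it draws, an entropy bonus over the full vocabulary versus one restricted to plausible tokens, can be sketched as follows. This is a minimal illustration only: it assumes the trust region is approximated by a nucleus (top-p) truncation of the policy distribution, and the function names `global_entropy` and `trust_region_entropy`, the `top_p` parameter, and the PyTorch implementation are assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def global_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Standard entropy over the full vocabulary, as in vanilla entropy
    regularization. Maximizing it spreads probability mass over all tokens,
    including the long tail of implausible ones."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    return -(probs * log_probs).sum(dim=-1)


def trust_region_entropy(logits: torch.Tensor, top_p: float = 0.9) -> torch.Tensor:
    """Illustrative sketch: entropy computed only over a 'trust region' of
    tokens, here approximated by the nucleus (top-p) set. Tokens outside the
    nucleus are masked before renormalizing, so the bonus rewards spreading
    mass among plausible candidates only. (Using top-p as the trust region
    is an assumption for illustration, not the paper's definition.)"""
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)
    # Keep the smallest prefix of tokens whose cumulative mass reaches top_p;
    # the highest-probability token is always kept.
    keep_sorted = (cumulative - sorted_probs) < top_p
    keep = torch.zeros_like(probs, dtype=torch.bool).scatter(-1, sorted_idx, keep_sorted)
    masked_logits = logits.masked_fill(~keep, float("-inf"))
    log_probs = F.log_softmax(masked_logits, dim=-1)
    trimmed_probs = log_probs.exp()
    # Masked tokens have prob 0 and log-prob -inf; zero those terms explicitly
    # to avoid 0 * (-inf) = nan.
    ent_terms = torch.where(keep, trimmed_probs * log_probs, torch.zeros_like(probs))
    return -ent_terms.sum(dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy logits: a batch of 2 decoding steps over a vocabulary of 8 tokens.
    logits = torch.randn(2, 8) * 3.0
    print("global entropy      :", global_entropy(logits))
    print("trust-region entropy:", trust_region_entropy(logits, top_p=0.9))
```

In a PPO-style objective, either quantity would be added as a per-token bonus; the difference is that the truncated version never pays the policy for pushing mass into tokens outside the high-probability set.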
Source: arXiv: 2602.03635