arXiv submission date: 2025-12-05
📄 Abstract - Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning

Large language model post-training relies on reinforcement learning to improve model capability and alignment quality. However, the off-policy training paradigm introduces distribution shift, which often pushes the policy beyond the trust region, leading to training instabilities manifested as fluctuations in policy entropy and unstable gradients. Although PPO-Clip mitigates this issue through importance clipping, it still overlooks the global distributional shift of actions. To address these challenges, we propose using the entropy ratio between the current and previous policies as a new global metric that effectively quantifies the relative change in policy exploration throughout updates. Building on this metric, we introduce an **Entropy Ratio Clipping** (ERC) mechanism that imposes bidirectional constraints on the entropy ratio. This stabilizes policy updates at the global distribution level and compensates for PPO-Clip's inability to regulate probability shifts of unsampled actions. We integrate ERC into both the DAPO and GPPO reinforcement learning algorithms. Experiments across multiple benchmarks show that ERC consistently improves performance.
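The abstract describes ERC only at a high level, so the following is a minimal sketch of the idea rather than the paper's exact formulation: compute the entropy of the current and previous policies, take their ratio, and clip it to a bidirectional band. The band limits (`eps_low`, `eps_high`), the penalty weight `lambda_erc`, and the way the clipped ratio enters a DAPO/GPPO-style loss are all illustrative assumptions, not details given in the abstract.

```python
import torch
import torch.nn.functional as F

def policy_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean token-level entropy of a policy given logits of shape [batch, seq, vocab]."""
    logp = F.log_softmax(logits, dim=-1)
    p = logp.exp()
    return -(p * logp).sum(dim=-1).mean()

def erc_penalty(curr_logits: torch.Tensor,
                prev_logits: torch.Tensor,
                eps_low: float = 0.9,
                eps_high: float = 1.1) -> torch.Tensor:
    """Soft bidirectional constraint on the entropy ratio H(pi_new) / H(pi_old).

    The penalty is zero while the ratio stays inside [eps_low, eps_high] and grows
    once it leaves the band; this is one plausible "soft constraint" reading, not
    necessarily the paper's exact rule.
    """
    h_new = policy_entropy(curr_logits)
    with torch.no_grad():  # previous policy is treated as a fixed reference
        h_old = policy_entropy(prev_logits)
    ratio = h_new / (h_old + 1e-8)
    clipped = torch.clamp(ratio, eps_low, eps_high)
    return (ratio - clipped).abs()

# Illustrative use inside a PPO/DAPO-style update:
# loss = pg_loss + lambda_erc * erc_penalty(curr_logits, prev_logits)
```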

Top-level tags: reinforcement learning, model training, theory
Detailed tags: policy entropy, distribution shift, ppo-clip, training stability, off-policy

Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning


1️⃣ One-Sentence Summary

This paper proposes a new method called Entropy Ratio Clipping, which stabilizes reinforcement learning training of large language models by controlling the global change in entropy between the new and old policies, effectively addressing the training instability caused by policy distribution shift.


Source: arXiv:2512.05591