Abstract - EVPO: Explained Variance Policy Optimization for Adaptive Critic Utilization in LLM Post-Training
Reinforcement learning (RL) for LLM post-training faces a fundamental design choice: whether to use a learned critic as a baseline for policy optimization. Classical theory favors critic-based methods such as PPO for variance reduction, yet critic-free alternatives like GRPO have gained widespread adoption due to their simplicity and competitive performance. We show that in sparse-reward settings, a learned critic can inject estimation noise that exceeds the state signal it captures, increasing rather than reducing advantage variance. By casting baseline selection as a Kalman filtering problem, we unify PPO and GRPO as two extremes of the Kalman gain and prove that explained variance (EV), computable from a single training batch, identifies the exact boundary: positive EV indicates the critic reduces variance, while zero or negative EV signals that it inflates variance. Building on this insight, we propose Explained Variance Policy Optimization (EVPO), which monitors batch-level EV at each training step and adaptively switches between critic-based and batch-mean advantage estimation, provably achieving no greater variance than the better of the two at every step. Across four tasks spanning classical control, agentic interaction, and mathematical reasoning, EVPO consistently outperforms both PPO and GRPO regardless of which fixed baseline is stronger on a given task. Further analysis confirms that the adaptive gating tracks critic maturation over training and that the theoretically derived zero threshold is empirically optimal.
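Below is a minimal NumPy sketch of the gating rule the abstract describes. `explained_variance` is the standard batch-level EV metric, 1 − Var(returns − values) / Var(returns), and `evpo_advantages` is a hypothetical helper applying the zero threshold from the abstract; the paper's exact advantage estimators (e.g., whether GAE or per-group normalization is used) are not given here, so the plain `returns − values` and batch-mean forms are assumptions.

```python
import numpy as np

def explained_variance(returns: np.ndarray, values: np.ndarray) -> float:
    """Batch-level explained variance of the critic:
    EV = 1 - Var(returns - values) / Var(returns)."""
    var_returns = np.var(returns)
    if var_returns < 1e-12:  # degenerate batch: no return signal for the critic to explain
        return 0.0
    return 1.0 - np.var(returns - values) / var_returns

def evpo_advantages(returns: np.ndarray, values: np.ndarray,
                    ev_threshold: float = 0.0) -> np.ndarray:
    """EV-gated advantage estimation (sketch): use the critic baseline only when
    batch-level EV indicates it reduces variance; otherwise fall back to the
    batch-mean baseline."""
    if explained_variance(returns, values) > ev_threshold:
        # Critic-based advantages (PPO-style baseline) -- assumed simple form
        return returns - values
    # Batch-mean advantages (GRPO-style, critic-free baseline)
    return returns - returns.mean()
```

With this gate, a batch where the critic merely predicts noise yields EV ≤ 0 and the batch-mean baseline is used; as the critic matures and EV turns positive, the critic baseline takes over, matching the adaptive switching behavior claimed above.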
EVPO: Explained Variance Policy Optimization for Adaptive Critic Utilization in LLM Post-Training
1️⃣ One-Sentence Summary
This paper proposes a new method called EVPO, which at every training step dynamically checks whether the critic model actually reduces the variance of policy optimization and adaptively switches between classical PPO and the simplified GRPO accordingly, consistently achieving better performance than both across a variety of sparse-reward tasks.