📄 Abstract - Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
Evolution Strategies (ES) have emerged as a scalable, gradient-free alternative to reinforcement-learning-based LLM fine-tuning, but it remains unclear whether comparable task performance implies comparable solutions in parameter space. We compare ES and Group Relative Policy Optimization (GRPO) across four tasks in both single-task and sequential continual-learning settings. ES matches or exceeds GRPO in single-task accuracy and remains competitive sequentially when its iteration budget is controlled. Despite this similarity in task performance, the two methods produce markedly different model updates: ES makes much larger changes and induces broader off-task KL drift, whereas GRPO makes smaller, more localized updates. Strikingly, the ES and GRPO solutions are linearly connected with no loss barrier, even though their update directions are nearly orthogonal. We develop an analytical theory of ES that explains all these phenomena within a unified framework, showing how ES can accumulate large off-task movement along weakly informative directions while still making enough progress on the task to match gradient-based RL in downstream accuracy. These results show that gradient-free and gradient-based fine-tuning can reach similarly accurate yet geometrically distinct solutions, with important consequences for forgetting and knowledge preservation. The source code is publicly available: this https URL.
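For readers unfamiliar with the gradient-free update the abstract contrasts with GRPO, here is a minimal sketch of a vanilla ES step: sample Gaussian perturbations of the parameters, score each perturbed copy with the (black-box) reward, and move along the reward-weighted average of the perturbations. This is a generic illustration on a toy quadratic reward, not the paper's implementation; `es_update` and all hyperparameters here are hypothetical choices.

```python
import random


def es_update(theta, reward_fn, sigma=0.1, lr=0.05, pop=50, rng=None):
    """One vanilla ES step with antithetic sampling.

    Estimates a search gradient of E[reward(theta + sigma * eps)]
    from reward differences of mirrored perturbations, then takes
    a gradient-ascent step. No backpropagation is needed: only
    black-box reward evaluations.
    """
    rng = rng or random.Random(0)
    n = len(theta)
    grad = [0.0] * n
    for _ in range(pop):
        eps = [rng.gauss(0.0, 1.0) for _ in range(n)]
        r_plus = reward_fn([t + sigma * e for t, e in zip(theta, eps)])
        r_minus = reward_fn([t - sigma * e for t, e in zip(theta, eps)])
        # Antithetic pair contributes (r+ - r-) / (2 * sigma) along eps.
        w = (r_plus - r_minus) / (2.0 * pop * sigma)
        for i in range(n):
            grad[i] += w * eps[i]
    return [t + lr * g for t, g in zip(theta, grad)]


# Toy task (hypothetical): maximize reward = -||theta - target||^2.
target = [1.0, -2.0, 0.5]
reward = lambda th: -sum((t - g) ** 2 for t, g in zip(th, target))

theta = [0.0, 0.0, 0.0]
rng = random.Random(42)
for _ in range(300):
    theta = es_update(theta, reward, rng=rng)
# theta drifts toward target using only reward evaluations.
```

In an LLM setting, `theta` would be (a subset of) the model weights and `reward_fn` a full rollout-and-score pass, which is what makes ES attractive for post-training: it needs only forward passes.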
Matching Accuracy, Different Geometry: Evolution Strategies vs GRPO in LLM Post-Training
1️⃣ One-Sentence Summary
This paper finds that although Evolution Strategies and gradient-based reinforcement learning achieve similar task accuracy when fine-tuning large language models, their updates in parameter space are strikingly different: the former makes larger, more diffuse updates while the latter makes smaller, more localized ones, offering new insight into knowledge retention and forgetting in fine-tuned models.
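The geometric claims above (large vs. small updates, nearly orthogonal directions, linear connectivity) can be probed with simple diagnostics on the weight deltas of two fine-tuned checkpoints. A hypothetical sketch on toy weight vectors, assuming a shared initialization `w_init`:

```python
import math


def cosine(u, v):
    """Cosine similarity between two update directions."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def interpolate(w0, w1, alpha):
    """Point on the linear path between two solutions (alpha in [0, 1])."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(w0, w1)]


# Hypothetical toy checkpoints: a large, diffuse ES-style update and a
# small, localized GRPO-style update from the same initialization.
w_init = [0.0, 0.0, 0.0, 0.0]
w_es = [2.0, 0.1, -1.5, 0.8]
w_grpo = [0.05, 0.6, 0.02, 0.0]

delta_es = [a - b for a, b in zip(w_es, w_init)]
delta_grpo = [a - b for a, b in zip(w_grpo, w_init)]

norm = lambda v: math.sqrt(sum(x * x for x in v))
print("update norms:", round(norm(delta_es), 3), round(norm(delta_grpo), 3))
print("cosine of update directions:", round(cosine(delta_es, delta_grpo), 3))

# A loss-barrier check would evaluate the task loss at
# interpolate(w_es, w_grpo, a) for a grid of alpha in [0, 1]
# and compare against the two endpoint losses.
```

With real checkpoints, a cosine near zero with no loss bump along the interpolated path is exactly the "nearly orthogonal yet linearly connected" picture the paper reports.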