arXiv submission date: 2026-01-11
📄 Abstract - No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning

Critique-guided reinforcement learning (RL) has emerged as a powerful paradigm for training LLM agents by augmenting sparse outcome rewards with natural-language feedback. However, current methods often rely on static or offline critic models, which fail to adapt as the policy evolves. In on-policy RL, the agent's error patterns shift over time, causing stationary critics to become stale and providing feedback of diminishing utility. To address this, we introduce ECHO (Evolving Critic for Hindsight-Guided Optimization), a framework that jointly optimizes the policy and critic through a synchronized co-evolutionary loop. ECHO utilizes a cascaded rollout mechanism where the critic generates multiple diagnoses for an initial trajectory, followed by policy refinement to enable group-structured advantage estimation. We address the challenge of learning plateaus via a saturation-aware gain shaping objective, which rewards the critic for inducing incremental improvements in high-performing trajectories. By employing dual-track GRPO updates, ECHO ensures the critic's feedback stays synchronized with the evolving policy. Experimental results show that ECHO yields more stable training and higher long-horizon task success across open-world environments.
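To make the training loop concrete, below is a minimal Python sketch of one co-evolution step as described in the abstract: a cascaded rollout (initial trajectory, a group of critic diagnoses, one refinement per diagnosis), group-relative advantages for both models, saturation-aware gain shaping for the critic, and dual-track GRPO-style updates. All names (`Policy`, `Critic`, `rollout`, `diagnose`, `grpo_update`) and the exact shaping thresholds are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Protocol


@dataclass
class Trajectory:
    actions: List[str]
    reward: float  # sparse outcome reward from the environment


class Policy(Protocol):
    # Hypothetical interface: rollout optionally conditioned on critic feedback.
    def rollout(self, env, task, feedback: Optional[str] = None) -> Trajectory: ...
    def grpo_update(self, samples, advantages) -> None: ...


class Critic(Protocol):
    # Hypothetical interface: produce a natural-language diagnosis of a trajectory.
    def diagnose(self, task, trajectory: Trajectory) -> str: ...
    def grpo_update(self, samples, advantages) -> None: ...


def saturation_aware_gain(r_initial: float, r_refined: float,
                          saturation: float = 0.9, bonus: float = 0.5) -> float:
    """Reward the critic for the improvement its feedback induces. When the
    initial trajectory is already high-performing (near saturation), small
    incremental gains are amplified so the critic's learning signal does not
    vanish. The threshold and scaling here are assumed, not from the paper."""
    gain = r_refined - r_initial
    if r_initial >= saturation:
        gain *= 1.0 + bonus
    return gain


def echo_step(policy: Policy, critic: Critic, env, task,
              num_diagnoses: int = 4) -> None:
    """One cascaded rollout followed by dual-track GRPO-style updates."""
    # 1) Initial on-policy rollout.
    tau0 = policy.rollout(env, task)

    # 2) The critic generates a group of diagnoses for the initial trajectory.
    critiques = [critic.diagnose(task, tau0) for _ in range(num_diagnoses)]

    # 3) The policy refines the trajectory once per critique, forming a group.
    refined = [policy.rollout(env, task, feedback=c) for c in critiques]

    # 4) Group-relative advantages for the policy (reward minus group mean).
    rewards = [t.reward for t in refined]
    baseline = sum(rewards) / len(rewards)
    policy_advantages = [r - baseline for r in rewards]

    # 5) Critic advantages from the saturation-aware gain each critique induced.
    gains = [saturation_aware_gain(tau0.reward, t.reward) for t in refined]
    gain_baseline = sum(gains) / len(gains)
    critic_advantages = [g - gain_baseline for g in gains]

    # 6) Dual-track updates keep the critic in sync with the evolving policy.
    policy.grpo_update(refined, policy_advantages)
    critic.grpo_update(critiques, critic_advantages)
```

The point this sketch tries to capture is that both updates happen inside the same loop, so the critic is always trained against the policy's current error patterns rather than a stale snapshot.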

Top-level tags: agents, reinforcement learning, model training
Detailed tags: critique-guided RL, co-evolution, on-policy learning, feedback adaptation, hindsight optimization

No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning


1️⃣ One-sentence summary

This paper proposes a new framework, ECHO, in which the critic model co-evolves in sync with the agent's policy. This addresses the problem of feedback growing stale in conventional reinforcement learning and yields more stable, more efficient training on complex open-world tasks.

Source: arXiv: 2601.06794