SkillRL: Evolving Agents via Recursive Skill-Augmented Reinforcement Learning
1️⃣ One-Sentence Summary
This paper proposes a new framework called SkillRL that lets AI agents automatically distill and accumulate reusable high-level skills from past experience, much as humans do, and, through the co-evolution of the skill library and the policy, significantly improves their performance and adaptability on complex tasks.
Large Language Model (LLM) agents have shown impressive results on complex tasks, yet they often operate in isolation and fail to learn from past experience. Existing memory-based methods primarily store raw trajectories, which are often redundant and noisy. This prevents agents from extracting the high-level, reusable behavioral patterns that are essential for generalization. In this paper, we propose SkillRL, a framework that bridges the gap between raw experience and policy improvement through automatic skill discovery and recursive evolution. Our approach introduces an experience-based distillation mechanism that builds a hierarchical skill library, SkillBank; an adaptive retrieval strategy for general and task-specific heuristics; and a recursive evolution mechanism that allows the skill library to co-evolve with the agent's policy during reinforcement learning. These innovations significantly reduce the token footprint while enhancing reasoning utility. Experimental results on ALFWorld, WebShop, and seven search-augmented tasks demonstrate that SkillRL achieves state-of-the-art performance, outperforming strong baselines by over 15.3% and maintaining robustness as task complexity increases. Code is available at this https URL.
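To make the three components concrete, here is a minimal Python sketch of how a SkillBank-style hierarchical skill library with adaptive retrieval and policy-coupled skill updates might be organized. The abstract gives no implementation details, so every name here (`Skill`, `SkillBank`, `retrieve`, `evolve`) and the keyword-overlap retrieval and running-average update rules are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a hierarchical skill library (SkillBank-style) with
# adaptive retrieval and recursive skill-utility updates. All class/method
# names and the retrieval/update rules are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Skill:
    name: str            # short identifier, e.g. "open_container_before_search"
    description: str     # natural-language heuristic distilled from trajectories
    level: str           # "general" (task-agnostic) or "task" (task-specific)
    score: float = 0.0   # running utility estimate, updated during RL


@dataclass
class SkillBank:
    skills: list[Skill] = field(default_factory=list)

    def add(self, skill: Skill) -> None:
        """Distillation step: insert a skill extracted from past experience."""
        self.skills.append(skill)

    def retrieve(self, task_keywords: set[str], k: int = 3) -> list[Skill]:
        """Adaptive retrieval: always keep general heuristics, then add the
        top-k task-specific skills whose descriptions overlap the task."""
        general = [s for s in self.skills if s.level == "general"]
        task_specific = [
            s for s in self.skills
            if s.level == "task"
            and task_keywords & set(s.description.lower().split())
        ]
        task_specific.sort(key=lambda s: s.score, reverse=True)
        return general + task_specific[:k]

    def evolve(self, skill_name: str, reward: float, lr: float = 0.1) -> None:
        """Recursive evolution step: nudge a skill's utility toward the reward
        observed when the policy used it, so the bank co-evolves with the policy."""
        for s in self.skills:
            if s.name == skill_name:
                s.score += lr * (reward - s.score)


# Example: distill two skills, retrieve for a new task, update after an episode.
bank = SkillBank()
bank.add(Skill("check_receptacles_first",
               "search likely receptacles before moving", "general"))
bank.add(Skill("compare_prices",
               "compare prices across webshop listings", "task"))
prompt_skills = bank.retrieve({"webshop", "prices", "buy"})
bank.evolve("compare_prices", reward=1.0)
```

In this reading, the retrieved skill descriptions would be injected into the agent's prompt (keeping only a few high-utility skills, which is one plausible way the reduced token footprint could arise), while `evolve` would be called with the RL reward signal so that skills which help the policy gain weight over training.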
Source: arXiv:2602.08234