
arXiv submission date: 2026-02-10
📄 Abstract - CODE-SHARP: Continuous Open-ended Discovery and Evolution of Skills as Hierarchical Reward Programs

Developing agents capable of open-endedly discovering and learning novel skills is a grand challenge in Artificial Intelligence. While reinforcement learning offers a powerful framework for training agents to master complex skills, it typically relies on hand-designed reward functions. This is infeasible for open-ended skill discovery, where the set of meaningful skills is not known a priori. While recent methods have shown promising results towards automating reward function design, they remain limited to refining rewards for pre-defined tasks. To address this limitation, we introduce Continuous Open-ended Discovery and Evolution of Skills as Hierarchical Reward Programs (CODE-SHARP), a novel framework leveraging Foundation Models (FM) to open-endedly expand and refine a hierarchical skill archive, structured as a directed graph of executable reward functions in code. We show that a goal-conditioned agent trained exclusively on the rewards generated by the discovered SHARP skills learns to solve increasingly long-horizon goals in the Craftax environment. When composed by a high-level FM-based planner, the discovered skills enable a single goal-conditioned agent to solve complex, long-horizon tasks, outperforming both pretrained agents and task-specific expert policies by over $134$% on average. We will open-source our code and provide additional videos $\href{this https URL}{here}$.
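The abstract describes the skill archive as a directed graph of executable reward functions written in code. As a rough illustration only, a minimal sketch of such a structure might look like the following; the names (`Skill`, `reward_fn`, `prerequisites`) and the Craftax-style inventory dictionaries are illustrative assumptions, not the authors' actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: a "skill" is an executable reward program plus edges to
# prerequisite skills, so the archive forms a directed graph. All names here
# are assumptions for illustration, not the paper's implementation.

@dataclass
class Skill:
    name: str
    # Reward program: maps (previous_state, current_state) -> scalar reward.
    reward_fn: Callable[[dict, dict], float]
    # Edges to the skills this one builds on (hierarchical structure).
    prerequisites: List[str] = field(default_factory=list)

# Example leaf skill: reward for gathering wood (Craftax-style inventory dict).
collect_wood = Skill(
    name="collect_wood",
    reward_fn=lambda prev, cur: float(
        cur["inventory"].get("wood", 0) > prev["inventory"].get("wood", 0)
    ),
)

# Example higher-level skill that depends on the leaf skill above.
make_wood_pickaxe = Skill(
    name="make_wood_pickaxe",
    reward_fn=lambda prev, cur: float(
        cur["inventory"].get("wood_pickaxe", 0)
        > prev["inventory"].get("wood_pickaxe", 0)
    ),
    prerequisites=["collect_wood"],
)

# The skill archive: a directed graph keyed by skill name.
archive: Dict[str, Skill] = {
    s.name: s for s in (collect_wood, make_wood_pickaxe)
}
```

In this reading, a foundation model would expand the archive by writing new reward programs and wiring them to existing nodes, while a goal-conditioned agent is trained only on the rewards those programs emit.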

Top-level tags: agents, reinforcement learning, model training
Detailed tags: skill discovery, hierarchical reinforcement learning, reward design, foundation models, open-ended learning

CODE-SHARP: Continuous Open-ended Discovery and Evolution of Skills as Hierarchical Reward Programs


1️⃣ One-sentence summary

This paper introduces a new framework called CODE-SHARP, which uses foundation models to automatically discover and evolve a set of executable skills (reward functions expressed as code), allowing an agent to learn to solve increasingly complex long-horizon tasks without any hand-designed rewards, and achieving markedly better performance than conventional methods in experiments.

From arXiv: 2602.10085