arXiv submission date: 2026-03-30
📄 Abstract - Evolutionary Discovery of Reinforcement Learning Algorithms via Large Language Models

Reinforcement learning algorithms are defined by their learning update rules, which are typically hand-designed and fixed. We present an evolutionary framework for discovering reinforcement learning algorithms by searching directly over executable update rules that implement complete training procedures. The approach builds on REvolve, an evolutionary system that uses large language models as generative variation operators, and extends it from reward-function discovery to algorithm discovery. To promote the emergence of nonstandard learning rules, the search excludes canonical mechanisms such as actor-critic structures, temporal-difference losses, and value bootstrapping. Because reinforcement learning algorithms are highly sensitive to internal scalar parameters, we introduce a post-evolution refinement stage in which a large language model proposes feasible hyperparameter ranges for each evolved update rule. Evaluated end-to-end by full training runs on multiple Gymnasium benchmarks, the discovered algorithms achieve competitive performance relative to established baselines, including SAC, PPO, DQN, and A2C.
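The search loop the abstract describes can be sketched generically. In the sketch below, `mutate` and `fitness` are hypothetical stand-ins: in the paper, variation is performed by a large language model rewriting an executable update rule, and fitness is a full training run on a Gymnasium benchmark. This is a minimal illustration of the evolutionary structure, not the paper's implementation.

```python
import random

def evolve(population, mutate, fitness, generations=10, elite=2):
    """Generic elitist evolutionary loop.

    `mutate` stands in for the LLM-driven variation operator (in the paper,
    an LLM rewrites an executable update rule); `fitness` stands in for a
    full end-to-end training run. Both are assumptions for illustration.
    """
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:elite]                      # keep the best candidates
        children = [mutate(random.choice(parents))    # variation from elites
                    for _ in range(len(population) - elite)]
        population = parents + children
    return max(population, key=fitness)

# Toy stand-ins: a candidate is a single scalar and fitness peaks at 1.0.
# Purely illustrative; candidates in the paper are full update-rule programs.
toy_fitness = lambda x: -(x - 1.0) ** 2
toy_mutate = lambda x: x + random.gauss(0, 0.1)

random.seed(0)  # reproducible toy run
best = evolve([random.uniform(-2, 2) for _ in range(8)],
              toy_mutate, toy_fitness, generations=30)
```

Because elitism always retains the best candidate, fitness is monotone non-decreasing across generations; the paper's post-evolution refinement stage would then tune scalar hyperparameters within LLM-proposed ranges for the surviving rule.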

Top-level tags: reinforcement learning · llm · model training
Detailed tags: algorithm discovery · evolutionary search · hyperparameter refinement · update rule generation · gymnasium benchmark

Evolutionary Discovery of Reinforcement Learning Algorithms via Large Language Models


1️⃣ One-sentence summary

This paper proposes a new approach that uses large language models as an "evolutionary engine" to automatically generate and optimize the core update rules of reinforcement learning algorithms, discovering new algorithms whose performance rivals mainstream hand-designed ones.

Source: arXiv:2603.28416