MAGE: Meta-Reinforcement Learning for Language Agents toward Strategic Exploration and Exploitation
1️⃣ One-Sentence Summary
This paper proposes MAGE, a meta-reinforcement learning framework that enables large language model agents, through multi-episode training and reflection, to learn strategic exploration and exploitation in dynamic environments. MAGE improves performance on both single-agent and multi-agent tasks and generalizes to unseen opponents.
Large Language Model (LLM) agents have demonstrated remarkable proficiency in learned tasks, yet they often struggle to adapt to non-stationary environments with feedback. While In-Context Learning and external memory offer some flexibility, they fail to internalize the adaptive ability required for long-term improvement. Meta-Reinforcement Learning (meta-RL) provides an alternative by embedding the learning process directly within the model. However, existing meta-RL approaches for LLMs focus primarily on exploration in single-agent settings, neglecting the strategic exploitation necessary for multi-agent environments. We propose MAGE, a meta-RL framework that empowers LLM agents for strategic exploration and exploitation. MAGE utilizes a multi-episode training regime in which interaction histories and reflections are integrated into the context window. By using the final episode reward as the objective, MAGE incentivizes the agent to refine its strategy based on past experiences. We further combine population-based training with an agent-specific advantage normalization technique to enrich agent diversity and ensure stable learning. Experimental results show that MAGE outperforms existing baselines on both exploration and exploitation tasks. Furthermore, MAGE exhibits strong generalization to unseen opponents, suggesting it has internalized the ability for strategic exploration and exploitation. Code is available at this https URL.
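The agent-specific advantage normalization mentioned in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name and the grouping-by-agent interface are assumptions, since the abstract gives no implementation details. The idea is to normalize advantages within each agent's own samples rather than across the whole population batch, so that agents with different reward scales do not destabilize each other's updates.

```python
from collections import defaultdict

def normalize_advantages_per_agent(advantages, agent_ids, eps=1e-8):
    """Normalize advantages per agent (hypothetical sketch).

    Instead of computing one mean/std over the whole mixed batch,
    group samples by agent and standardize within each group.
    """
    # Group advantage values by the agent that produced them.
    groups = defaultdict(list)
    for adv, aid in zip(advantages, agent_ids):
        groups[aid].append(adv)

    # Per-agent mean and standard deviation.
    stats = {}
    for aid, vals in groups.items():
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        stats[aid] = (mean, var ** 0.5)

    # Standardize each sample using its own agent's statistics.
    return [
        (adv - stats[aid][0]) / (stats[aid][1] + eps)
        for adv, aid in zip(advantages, agent_ids)
    ]
```

In a population-based training loop, this would be applied to the advantage estimates of each update batch before the policy-gradient step, keeping updates comparable across heterogeneous agents.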
Source: arXiv:2603.03680