arXiv submission date: 2026-02-16
📄 Abstract - Fluid-Agent Reinforcement Learning

The primary focus of multi-agent reinforcement learning (MARL) has been to study interactions among a fixed number of agents embedded in an environment. However, in the real world, the number of agents is neither fixed nor known a priori. Moreover, an agent can decide to create other agents (for example, a cell may divide, or a company may spin off a division). In this paper, we propose a framework that allows agents to create other agents; we call this a fluid-agent environment. We present game-theoretic solution concepts for fluid-agent games and empirically evaluate the performance of several MARL algorithms within this framework. Our experiments include fluid variants of established benchmarks such as Predator-Prey and Level-Based Foraging, where agents can dynamically spawn, as well as a new environment we introduce that highlights how fluidity can unlock novel solution strategies beyond those observed in fixed-population settings. We demonstrate that this framework yields agent teams that adjust their size dynamically to match environmental demands.
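The spawning dynamic described in the abstract can be sketched as a toy environment loop in which each agent either gathers a resource or pays a cost to create a new agent. Everything below (the `FluidEnv` class, the `SPAWN`/`GATHER` actions, the reward scheme, and the heuristic policy) is an illustrative assumption, not the paper's actual API or algorithm:

```python
# Toy sketch of a fluid-agent environment: the agent population is not
# fixed, and any agent may take a "spawn" action to create a new agent.
SPAWN, GATHER = "spawn", "gather"

class FluidEnv:
    """Each step, a number of resources is available; each agent either
    gathers one resource (reward 1 if any remain) or spawns a new agent
    (paying a fixed spawn cost)."""

    def __init__(self, n_agents=1, spawn_cost=2.0):
        self.agents = list(range(n_agents))
        self.next_id = n_agents
        self.spawn_cost = spawn_cost

    def step(self, actions, resources):
        rewards = {}
        # Gatherers are rewarded up to the number of available resources.
        gatherers = [a for a, act in actions.items() if act == GATHER]
        for i, a in enumerate(gatherers):
            rewards[a] = 1.0 if i < resources else 0.0
        # Every spawn action grows the population (at a cost to the spawner).
        for a, act in actions.items():
            if act == SPAWN:
                rewards[a] = -self.spawn_cost
                self.agents.append(self.next_id)
                self.next_id += 1
        return rewards

env = FluidEnv(n_agents=1)
for t in range(5):
    resources = 4  # environmental demand per step
    # Heuristic policy: spawn while the team is smaller than the demand.
    actions = {a: (SPAWN if len(env.agents) < resources else GATHER)
               for a in env.agents}
    env.step(actions, resources)

print(len(env.agents))  # team size has grown to match the demand of 4
```

Even this trivial heuristic reproduces the paper's headline behavior: starting from a single agent, the team grows until its size matches the environmental demand, then stops spawning and switches to gathering.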

Top-level tags: multi-agent reinforcement learning, agents
Detailed tags: dynamic population, agent spawning, fluid-agent games, multi-agent RL, population adaptation

Fluid-Agent Reinforcement Learning


1️⃣ One-sentence summary

This paper proposes a new framework called "fluid agents" that lets reinforcement-learning agents dynamically create other agents and adjust the team size to match environmental demands, much as a cell divides or a company spins off a division, thereby lifting the fixed-population restriction of traditional multi-agent reinforcement learning.

Source: arXiv:2602.14559