arXiv submission date: 2026-03-17
📄 Abstract - Communication-Aware Multi-Agent Reinforcement Learning for Decentralized Cooperative UAV Deployment

Autonomous Unmanned Aerial Vehicle (UAV) swarms are increasingly used as rapidly deployable aerial relays and sensing platforms, yet practical deployments must operate under partial observability and intermittent peer-to-peer links. We present a graph-based multi-agent reinforcement learning framework trained under centralized training with decentralized execution (CTDE): a centralized critic and global state are available only during training, while each UAV executes a shared policy using local observations and messages from nearby neighbors. Our architecture encodes local agent state and nearby entities with an agent-entity attention module, and aggregates inter-UAV messages with neighbor self-attention over a distance-limited communication graph. We evaluate primarily on a cooperative relay deployment task (DroneConnect) and secondarily on an adversarial engagement task (DroneCombat). In DroneConnect, the proposed method achieves high coverage under restricted communication and partial observation (e.g. 74% coverage with M = 5 UAVs and N = 10 nodes) while remaining competitive with a mixed-integer linear programming (MILP) optimization-based offline upper bound, and it generalizes to unseen team sizes without fine-tuning. In the adversarial setting, the same framework transfers without architectural changes and improves win rate over non-communicating baselines.
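The abstract describes message aggregation via neighbor self-attention over a distance-limited communication graph. The sketch below illustrates that idea under stated assumptions: a binary adjacency built from a communication radius, and a single-head, unparameterized dot-product attention over each UAV's neighbors (the paper's module presumably uses learned query/key/value projections; function names and the fallback behavior for isolated UAVs are illustrative choices, not the authors' implementation).

```python
import numpy as np

def comm_graph(positions, radius):
    """Boolean adjacency: UAV j is a neighbor of i if within `radius` (self excluded)."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return (d <= radius) & ~np.eye(len(positions), dtype=bool)

def neighbor_attention(h, adj):
    """Aggregate per-UAV message embeddings h (M, D) over the communication graph.

    Single-head dot-product attention restricted to graph neighbors;
    a UAV with no neighbors in range keeps its own embedding.
    """
    M, D = h.shape
    scores = h @ h.T / np.sqrt(D)            # pairwise attention scores
    scores = np.where(adj, scores, -np.inf)  # mask out non-neighbors
    out = h.copy()                           # isolated UAVs fall back to own state
    for i in range(M):
        if adj[i].any():
            w = np.exp(scores[i] - scores[i, adj[i]].max())  # exp(-inf) = 0 masks
            w /= w.sum()
            out[i] = w @ h                   # attention-weighted neighbor messages
    return out
```

Because the adjacency is recomputed from positions at each step, the aggregation naturally handles intermittent peer-to-peer links: a UAV that moves out of range simply drops out of its former neighbors' softmax.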

Top-level tags: multi-agents reinforcement learning systems
Detailed tags: multi-agent reinforcement learning, decentralized execution, uav swarms, communication graph, partial observability

Communication-Aware Multi-Agent Reinforcement Learning for Decentralized Cooperative UAV Deployment


1️⃣ One-Sentence Summary

This paper proposes a graph-based multi-agent reinforcement learning method that lets a swarm of UAVs, each observing only part of the environment under restricted communication, cooperate by exchanging messages with nearby neighbors to complete tasks such as efficiently providing communication relay coverage to ground nodes.
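The headline result (e.g., 74% coverage with M = 5 UAVs and N = 10 nodes) rests on a coverage metric for ground nodes. As a hedged sketch, assuming a simple binary disk model in which a node counts as covered when it lies within a service radius of at least one UAV (the paper's exact coverage definition may differ, e.g. using link quality rather than distance):

```python
import numpy as np

def coverage_fraction(uav_pos, node_pos, service_radius):
    """Fraction of ground nodes within `service_radius` of at least one UAV.

    uav_pos:  (M, 2) UAV positions
    node_pos: (N, 2) ground-node positions
    """
    # distance from every node to every UAV, shape (N, M)
    d = np.linalg.norm(node_pos[:, None, :] - uav_pos[None, :, :], axis=-1)
    covered = (d <= service_radius).any(axis=1)  # node covered by any UAV
    return covered.mean()
```

Under this model, a policy maximizing coverage must spread the M UAVs so their service disks jointly reach as many of the N nodes as possible, which is the geometric core of the relay deployment task.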

Source: arXiv:2603.16141