arXiv submission date: 2026-01-09
📄 Abstract - TowerMind: A Tower Defence Game Learning Environment and Benchmark for LLM as Agents

Recent breakthroughs in Large Language Models (LLMs) have positioned them as a promising paradigm for agents, with long-term planning and decision-making emerging as core general-purpose capabilities for adapting to diverse scenarios and tasks. Real-time strategy (RTS) games serve as an ideal testbed for evaluating these two capabilities, as their inherent gameplay requires both macro-level strategic planning and micro-level tactical adaptation and action execution. Existing RTS game-based environments either suffer from relatively high computational demands or lack support for textual observations, which has constrained the use of RTS games for LLM evaluation. Motivated by this, we present TowerMind, a novel environment grounded in the tower defense (TD) subgenre of RTS games. TowerMind preserves the key evaluation strengths of RTS games for assessing LLMs, while featuring low computational demands and a multimodal observation space, including pixel-based, textual, and structured game-state representations. In addition, TowerMind supports the evaluation of model hallucination and provides a high degree of customizability. We design five benchmark levels to evaluate several widely used LLMs under different multimodal input settings. The results reveal a clear performance gap between LLMs and human experts across both capability and hallucination dimensions. The experiments further highlight key limitations in LLM behavior, such as inadequate planning validation, a lack of multifinality in decision-making, and inefficient action use. We also evaluate two classic reinforcement learning algorithms: Ape-X DQN and PPO. By offering a lightweight and multimodal design, TowerMind complements the existing RTS game-based environment landscape and introduces a new benchmark for the AI agent field. The source code is publicly available on GitHub (this https URL).
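The abstract does not describe TowerMind's programming interface, so the following is only a minimal, hypothetical sketch of how a Gym-style tower defense environment could expose the three observation modalities mentioned above (pixel, textual, and structured game state) to an LLM agent. All class names, dictionary keys, and action fields here are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a Gym-style loop for a TowerMind-like environment.
# Names and fields are assumptions for illustration; the real API may differ.
from typing import Any, Dict, Tuple

class TowerDefenseEnvSketch:
    """Stand-in environment exposing pixel, textual, and structured observations."""

    def reset(self) -> Dict[str, Any]:
        # Return one observation per modality described in the abstract.
        return {
            "pixels": None,  # e.g. an HxWx3 frame of the map (omitted here)
            "text": "Wave 1 approaching from the north lane.",
            "state": {"gold": 100, "lives": 20, "towers": []},
        }

    def step(self, action: Dict[str, Any]) -> Tuple[Dict[str, Any], float, bool, Dict]:
        # An agent (e.g. an LLM reading the textual observation) submits an
        # action such as building a tower; the environment returns the next
        # observation, a reward, a done flag, and auxiliary info.
        obs = self.reset()  # placeholder transition for this sketch
        return obs, 0.0, False, {}

# Example interaction: place an (assumed) "arrow" tower at a tile coordinate.
env = TowerDefenseEnvSketch()
obs = env.reset()
action = {"type": "build", "tower": "arrow", "tile": (3, 5)}
obs, reward, done, info = env.step(action)
```

A structured action format like the dictionary above is one plausible way to check for hallucinated actions (e.g. building a tower type that does not exist or placing it on an invalid tile), which is the kind of hallucination evaluation the abstract says TowerMind supports.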

Top-level tags: llm agents benchmark
Detailed tags: tower defense real-time strategy multimodal environment planning evaluation agent hallucination

TowerMind: A Tower Defence Game Learning Environment and Benchmark for LLM as Agents


1️⃣ One-Sentence Summary

This paper presents TowerMind, a lightweight, multimodal tower defense game environment for evaluating large language models' long-term planning and real-time decision-making abilities, and shows that current models still fall clearly short of human experts in both strategy formulation and hallucination avoidance.

Source: arXiv: 2601.05899