arXiv submission date: 2026-04-09
📄 Abstract - CivBench: Progress-Based Evaluation for LLMs' Strategic Decision-Making in Civilization V

Evaluating strategic decision-making in LLM-based agents requires generative, competitive, and longitudinal environments, yet few benchmarks provide all three, and fewer still offer evaluation signals rich enough for long-horizon, multi-agent play. We introduce CivBench, a benchmark for LLM strategists (i.e., agentic setups) in multiplayer Civilization V. Because terminal win/loss is too sparse a signal in games spanning hundreds of turns and multiple opponents, CivBench trains models on turn-level game state to estimate victory probabilities throughout play, validated through predictive, construct, and convergent validity. Across 307 games with 7 LLMs and multiple CivBench agent conditions, we demonstrate CivBench's potential to estimate strategic capabilities as an unsaturated benchmark, reveal model-specific effects of agentic setup, and outline distinct strategic profiles not visible through outcome-only evaluation.

Top-level tags: llm agents benchmark
Detailed tags: strategic decision-making multi-agent evaluation progress-based metrics game ai long-horizon planning

CivBench: Progress-Based Evaluation for LLMs' Strategic Decision-Making in Civilization V


1️⃣ One-Sentence Summary

This paper introduces CivBench, a new evaluation benchmark that dynamically predicts victory probabilities by analyzing the game state at every turn of Civilization V. This yields a finer-grained, more effective measure of different LLMs' strategic decision-making in complex, long-horizon, multi-agent competitive environments than looking only at the final win/loss outcome.
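The core idea, scoring every turn's game state instead of waiting for the terminal win/loss, can be illustrated with a minimal sketch. The feature names and weights below are purely illustrative assumptions, not the paper's actual model or features:

```python
import math

def win_probability(features, weights, bias=0.0):
    """Logistic model mapping turn-level state features to a victory probability.

    A minimal sketch of the progress-based idea: instead of a single sparse
    terminal win/loss label, every turn's game state gets a dense score.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical turn-level features for one player (illustrative, not from the
# paper): normalized science output, military strength, city count.
# In CivBench the weights would be learned from logged games; these are made up.
weights = [1.2, 0.8, 0.5]
early_turn = [0.1, 0.2, 0.1]   # weak position early in the game
late_turn = [0.9, 0.7, 0.8]    # dominant position hundreds of turns later

p_early = win_probability(early_turn, weights)
p_late = win_probability(late_turn, weights)
assert p_early < p_late  # the progress signal tracks game-state advantage
```

A dense per-turn signal like this is what lets the benchmark compare agents across hundreds of turns and multiple opponents, where a single end-of-game label would be far too sparse.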

Source: arXiv:2604.07733