arXiv submission date: 2026-02-19
📄 Abstract - AI Gamestore: Scalable, Open-Ended Evaluation of Machine General Intelligence with Human Games

Rigorously evaluating machine intelligence against the broad spectrum of human general intelligence has become increasingly important and challenging in this era of rapid technological advance. Conventional AI benchmarks typically assess only narrow capabilities in a limited range of human activity. Most are also static, quickly saturating as developers explicitly or implicitly optimize for them. We propose that a more promising way to evaluate human-like general intelligence in AI systems is through a particularly strong form of general game playing: studying how and how well they play and learn to play all conceivable human games, in comparison to human players with the same level of experience, time, or other resources. We define a "human game" to be a game designed by humans for humans, and argue for the evaluative suitability of this space of all such games people can imagine and enjoy -- the "Multiverse of Human Games". Taking a first step towards this vision, we introduce the AI GameStore, a scalable and open-ended platform that uses LLMs with humans-in-the-loop to synthesize new representative human games, by automatically sourcing and adapting standardized and containerized variants of game environments from popular human digital gaming platforms. As a proof of concept, we generated 100 such games based on the top charts of Apple App Store and Steam, and evaluated seven frontier vision-language models (VLMs) on short episodes of play. The best models achieved less than 10% of the human average score on the majority of the games, and especially struggled with games that challenge world-model learning, memory and planning. We conclude with a set of next steps for building out the AI GameStore as a practical way to measure and drive progress toward human-like general intelligence in machines.

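To make the headline comparison concrete, here is a minimal Python sketch of the kind of human-normalized scoring the abstract describes: each model's score on a game is divided by the human average score for that game, and games where the ratio falls below 10% are counted. The function, game names, and numbers below are illustrative assumptions, not the paper's actual data or evaluation code.

```python
# Hypothetical sketch of human-normalized scoring as described in the abstract:
# a model's episode score on each game is divided by the human average score
# for that game, then we count how many games fall below a 10% threshold.
# All names and values here are made up for illustration.

from statistics import median


def human_normalized_scores(model_scores: dict[str, float],
                            human_avg_scores: dict[str, float]) -> dict[str, float]:
    """Return the model's score as a fraction of the human average, per game."""
    return {
        game: model_scores[game] / human_avg_scores[game]
        for game in model_scores
        if human_avg_scores.get(game, 0) > 0
    }


# Toy example with invented numbers (not from the paper).
model_scores = {"puzzle_game": 12.0, "arcade_game": 3.0, "strategy_game": 1.0}
human_avg_scores = {"puzzle_game": 100.0, "arcade_game": 80.0, "strategy_game": 50.0}

normalized = human_normalized_scores(model_scores, human_avg_scores)
below_10_percent = sum(1 for v in normalized.values() if v < 0.10)
print(f"median normalized score: {median(normalized.values()):.3f}")
print(f"games below 10% of human average: {below_10_percent}/{len(normalized)}")
```
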
Top-level tags: benchmark, model evaluation, agents
Detailed tags: general game playing, evaluation platform, vision-language models, human-like intelligence, scalable testing

AI Gamestore: Scalable, Open-Ended Evaluation of Machine General Intelligence with Human Games


1️⃣ One-sentence summary

This paper proposes a new evaluation platform called the "AI GameStore", which measures whether an AI system approaches human-like general intelligence in a more comprehensive and dynamic way: by having it learn and play a large collection of games designed by humans for humans. Initial tests show that today's frontier models score far below the human average on most of these games.

Source: arXiv: 2602.17594