arXiv submission date: 2025-12-16
📄 Abstract - MobileWorldBench: Towards Semantic World Modeling For Mobile Agents

World models have shown great utility in improving the task performance of embodied agents. While prior work largely focuses on pixel-space world models, these approaches face practical limitations in GUI settings, where predicting complex visual elements in future states is often difficult. In this work, we explore an alternative formulation of world modeling for GUI agents, where state transitions are described in natural language rather than by predicting raw pixels. First, we introduce MobileWorldBench, a benchmark that evaluates the ability of vision-language models (VLMs) to function as world models for mobile GUI agents. Second, we release MobileWorld, a large-scale dataset of 1.4M samples that significantly improves the world modeling capabilities of VLMs. Finally, we propose a novel framework that integrates VLM world models into the planning framework of mobile agents, demonstrating that semantic world models can directly benefit mobile agents by improving task success rates. The code and dataset are available at this https URL
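
To make the formulation concrete, here is a minimal sketch (an assumption, not the paper's actual method or API) of how a VLM acting as a semantic world model could plug into action selection: the model describes the post-action screen in natural language, and the planner keeps the candidate action whose predicted outcome best matches the task goal. The `vlm.generate` and `vlm.score_alignment` calls are hypothetical placeholders.

```python
# A minimal sketch of the semantic world-modeling idea: instead of
# predicting future pixels, a VLM predicts a natural-language description
# of the next GUI state, which a planner can use to score candidate actions.
# The `vlm` interface (generate / score_alignment) is hypothetical, not the
# paper's actual API.

from dataclasses import dataclass


@dataclass
class Transition:
    state_desc: str  # natural-language description of the current screen
    action: str      # candidate UI action, e.g. 'tap("Settings")'


def predict_next_state(vlm, t: Transition) -> str:
    """Ask the VLM to describe, in words, the screen after the action."""
    prompt = (
        f"Current screen: {t.state_desc}\n"
        f"Action: {t.action}\n"
        "Describe the screen after this action is executed."
    )
    return vlm.generate(prompt)  # hypothetical VLM call


def pick_action(vlm, state_desc: str, candidates: list[str], goal: str) -> str:
    """Rank candidate actions by how well the predicted next state
    (a text description, not pixels) aligns with the task goal."""
    scored = []
    for action in candidates:
        next_desc = predict_next_state(vlm, Transition(state_desc, action))
        score = vlm.score_alignment(next_desc, goal)  # hypothetical scorer
        scored.append((score, action))
    return max(scored)[1]  # action whose predicted outcome best fits the goal
```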

Top-level tags: agents, multi-modal, benchmark
Detailed tags: world modeling, gui agents, vision-language models, mobile agents, semantic state transitions

MobileWorldBench: Towards Semantic World Modeling For Mobile Agents


1️⃣ One-sentence summary

This paper introduces a new benchmark, MobileWorldBench, and a large-scale dataset, MobileWorld, which build semantic world models for GUI agents via natural language rather than pixel prediction, and shows that these models effectively improve the task success rates of mobile agents.


Source: arXiv:2512.14014