arXiv submission date: 2026-01-19
📄 Abstract - Aligning Agentic World Models via Knowledgeable Experience Learning

Current Large Language Models (LLMs) exhibit a critical modal disconnect: they possess vast semantic knowledge but lack the procedural grounding to respect the immutable laws of the physical world. Consequently, while these agents implicitly function as world models, their simulations often suffer from physical hallucinations: generating plans that are logically sound but physically unexecutable. Existing alignment strategies predominantly rely on resource-intensive training or fine-tuning, which attempts to compress dynamic environmental rules into static model parameters. However, such parametric encapsulation is inherently rigid, struggling to adapt to the open-ended variability of physical dynamics without continuous, costly retraining. To bridge this gap, we introduce WorldMind, a framework that autonomously constructs a symbolic World Knowledge Repository by synthesizing environmental feedback. Specifically, it unifies Process Experience, which enforces physical feasibility via prediction errors, with Goal Experience, which guides task optimality through successful trajectories. Experiments on EB-ALFRED and EB-Habitat demonstrate that WorldMind outperforms baselines, with remarkable cross-model and cross-environment transferability.
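The two experience types described in the abstract can be pictured as a small data structure: feasibility rules distilled from prediction errors, and strategies distilled from successful trajectories. The sketch below is purely illustrative; the class and method names are assumptions and do not come from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a symbolic World Knowledge Repository in the
# spirit of WorldMind. All names here are illustrative assumptions,
# not the paper's actual API.

@dataclass
class WorldKnowledgeRepository:
    # Physical-feasibility rules mined from prediction errors (Process Experience).
    process_rules: list = field(default_factory=list)
    # Reusable task strategies mined from successful trajectories (Goal Experience).
    goal_strategies: list = field(default_factory=list)

    def add_process_experience(self, predicted: str, observed: str, rule: str) -> None:
        # A mismatch between the agent's predicted outcome and the
        # environment's actual feedback yields a symbolic feasibility rule.
        if predicted != observed:
            self.process_rules.append(rule)

    def add_goal_experience(self, trajectory: list, succeeded: bool) -> None:
        # Only successful trajectories are kept as guidance toward task optimality.
        if succeeded:
            self.goal_strategies.append(trajectory)

    def retrieve(self) -> dict:
        # At planning time, both kinds of experience condition the agent's plan.
        return {"feasibility": self.process_rules, "strategies": self.goal_strategies}


repo = WorldKnowledgeRepository()
repo.add_process_experience(
    predicted="door opens",
    observed="door is locked",
    rule="doors may be locked; check state before opening",
)
repo.add_goal_experience(["goto fridge", "open fridge", "take apple"], succeeded=True)
print(repo.retrieve())
```

Because the repository is symbolic rather than parametric, rules like the one above can be added or revised from fresh environmental feedback without any retraining, which is the adaptability argument the abstract makes against fine-tuning.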

Top-level tags: agents llm systems
Detailed tags: world models physical grounding experience learning knowledge repository agent alignment

Aligning Agentic World Models via Knowledgeable Experience Learning


1️⃣ One-sentence summary

This paper proposes a framework named WorldMind, which lets an AI agent automatically learn the rules of the physical world from its successful and failed interactions with the environment, addressing the fundamental problem that current large language models, despite their rich knowledge, lack physical common sense and frequently produce plans that cannot actually be executed.

From arXiv: 2601.13247