arXiv submission date: 2026-04-01
📄 Abstract - Multimodal Language Models Cannot Spot Spatial Inconsistencies

Spatial consistency is a fundamental property of the visual world and a key requirement for models that aim to understand physical reality. Despite recent advances, multimodal large language models (MLLMs) often struggle to reason about 3D geometry across multiple views. Rather than asking models to describe scene attributes, we introduce a more challenging task: given two views of the same scene, identify the object that violates 3D motion consistency. We propose a simple and scalable method for generating realistic, spatially inconsistent image pairs from multi-view scenes, enabling systematic evaluation of this capability. Our results show that state-of-the-art MLLMs significantly underperform human observers and exhibit substantial variability across different scene attributes, revealing a fragile and incomplete understanding of 3D structure. We hope our findings underscore the need for approaches that develop a more deeply grounded understanding of the physical world.
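The abstract mentions a simple, scalable way to generate spatially inconsistent image pairs from multi-view scenes, but does not spell out the mechanics. As a rough illustration of the underlying geometry, here is a minimal sketch, assuming a pinhole camera model and synthetic object centers: the same objects are projected into two views, and one object's 3D position is displaced in the second view only, so its apparent motion cannot be explained by the camera change. All names, poses, and values below are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: build a spatially inconsistent view pair by
# projecting shared 3D objects into two cameras, then displacing ONE
# object's world position in the second view only. Illustrative only;
# this is not the paper's actual generation pipeline.
import numpy as np

def project(points_3d: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pinhole projection of Nx3 world points into pixel coordinates."""
    cam = (R @ points_3d.T + t[:, None]).T        # world -> camera frame
    uv = (K @ cam.T).T                            # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

# Shared intrinsics and two camera poses (assumed values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R1, t1 = np.eye(3), np.zeros(3)
theta = np.deg2rad(10.0)                          # second camera yawed by 10 degrees
R2 = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
t2 = np.array([-0.5, 0.0, 0.0])

# Object centers in the scene (world frame).
objects = np.array([[0.0, 0.0, 5.0],
                    [1.0, 0.2, 6.0],
                    [-1.2, -0.3, 4.5]])

view1 = project(objects, K, R1, t1)

# Inconsistent pair: move object 1 in world space for view 2 only,
# so its apparent motion violates 3D consistency across the views.
tampered = objects.copy()
tampered[1] += np.array([0.6, 0.0, 0.0])
view2 = project(tampered, K, R2, t2)

print("View 1 pixels:\n", view1)
print("View 2 pixels (object 1 inconsistent):\n", view2)
```

A model with a genuine grasp of 3D structure could, in principle, flag the displaced object by checking which projected point is inconsistent with the camera motion (e.g., via the epipolar constraint); the paper's finding is that current MLLMs largely fail at this kind of check.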

Top-level tags: multi-modal, model evaluation, computer vision
Detailed tags: spatial reasoning, 3d consistency, multimodal llms, evaluation benchmark, visual understanding

Multimodal Language Models Cannot Spot Spatial Inconsistencies


1️⃣ One-Sentence Summary

Through a new task, this paper finds that current state-of-the-art multimodal large language models perform far worse than humans at spotting spatial inconsistencies in object motion across different views of the same scene, revealing that their understanding of 3D geometric structure remains fragile and incomplete.

Source: arXiv: 2604.00799