arXiv submission date: 2026-03-17
📄 Abstract - VIEW2SPACE: Studying Multi-View Visual Reasoning from Sparse Observations

Multi-view visual reasoning is essential for intelligent systems that must understand complex environments from sparse and discrete viewpoints, yet existing research has largely focused on single-image or temporally dense video settings. In real-world scenarios, reasoning across views requires integrating partial observations without explicit guidance, while collecting large-scale multi-view data with accurate geometric and semantic annotations remains challenging. To address this gap, we leverage physically grounded simulation to construct diverse, high-fidelity 3D scenes with precise per-view metadata, enabling scalable data generation that remains transferable to real-world settings. Based on this engine, we introduce VIEW2SPACE, a multi-dimensional benchmark for sparse multi-view reasoning, together with a scalable, disjoint training split supporting millions of grounded question-answer pairs. Using this benchmark, a comprehensive evaluation of state-of-the-art vision-language and spatial models reveals that multi-view reasoning remains largely unsolved, with most models performing only marginally above random guessing. We further investigate whether training can bridge this gap. Our proposed Grounded Chain-of-Thought with Visual Evidence substantially improves performance under moderate difficulty, and generalizes to real-world data, outperforming existing approaches in cross-dataset evaluation. We further conduct difficulty-aware scaling analyses across model size, data scale, reasoning depth, and visibility constraints, indicating that while geometric perception can benefit from scaling under sufficient visibility, deep compositional reasoning across sparse views remains a fundamental challenge.
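
The abstract describes grounded question-answer pairs built from simulated 3D scenes with precise per-view metadata, and an evaluation in which most models score only marginally above random guessing. The sketch below is a minimal, hypothetical illustration of how such a sparse multi-view QA sample and a random-guess baseline might be represented; all class names, fields, and the scoring function are assumptions made for illustration, not the VIEW2SPACE data format or evaluation protocol.

```python
# Hypothetical sketch: a sparse multi-view QA record with per-view metadata
# and an accuracy helper for comparing a model against random guessing.
# Field names are illustrative assumptions, not the paper's actual schema.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import random

@dataclass
class ViewMetadata:
    image_path: str                      # rendered view from the simulated 3D scene
    camera_position: Tuple[float, float, float]  # (x, y, z) in scene coordinates
    camera_rotation: Tuple[float, float, float, float]  # e.g. quaternion (w, x, y, z)

@dataclass
class MultiViewQASample:
    scene_id: str
    views: List[ViewMetadata]            # sparse, discrete viewpoints of one scene
    question: str
    choices: List[str]                   # multiple-choice options
    answer_index: int                    # index of the correct choice

def accuracy(samples: List[MultiViewQASample],
             predict: Callable[[MultiViewQASample], int]) -> float:
    """Fraction of samples where the predicted choice matches the gold answer."""
    correct = sum(predict(s) == s.answer_index for s in samples)
    return correct / max(len(samples), 1)

def random_baseline(sample: MultiViewQASample) -> int:
    # Uniform guess over the answer choices: the floor that, per the abstract,
    # many evaluated models only marginally exceed.
    return random.randrange(len(sample.choices))
```

A model under test would simply supply its own `predict` function mapping a sample to a choice index, so its accuracy can be compared against `random_baseline` on the same samples.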

Top-level tags: multi-modal, benchmark, model evaluation
Detailed tags: multi-view reasoning, visual question answering, sparse observations, 3D scenes, grounded reasoning

VIEW2SPACE: Studying Multi-View Visual Reasoning from Sparse Observations


1️⃣ One-sentence summary

This paper introduces VIEW2SPACE, a new benchmark for evaluating how well AI systems understand complex 3D scenes from a few discontinuous viewpoints. The study finds that existing models perform poorly on this task, shows that the authors' proposed training method substantially improves performance, and concludes that deep compositional reasoning across sparse views remains a fundamental challenge.

Source: arXiv 2603.16506