arXiv submission date: 2026-02-24
📄 Abstract - Spa3R: Predictive Spatial Field Modeling for 3D Visual Reasoning

While Vision-Language Models (VLMs) exhibit exceptional 2D visual understanding, their ability to comprehend and reason about 3D space--a cornerstone of spatial intelligence--remains superficial. Current methodologies attempt to bridge this domain gap either by relying on explicit 3D modalities or by augmenting VLMs with partial, view-conditioned geometric priors. However, such approaches hinder scalability and ultimately burden the language model with the ill-posed task of implicitly reconstructing holistic 3D geometry from sparse cues. In this paper, we argue that spatial intelligence can emerge inherently from 2D vision alone, rather than being imposed via explicit spatial instruction tuning. To this end, we introduce Spa3R, a self-supervised framework that learns a unified, view-invariant spatial representation directly from unposed multi-view images. Spa3R is built upon the proposed Predictive Spatial Field Modeling (PSFM) paradigm, where Spa3R learns to synthesize feature fields for arbitrary unseen views conditioned on a compact latent representation, thereby internalizing a holistic and coherent understanding of the underlying 3D scene. We further integrate the pre-trained Spa3R Encoder into existing VLMs via a lightweight adapter to form Spa3-VLM, effectively grounding language reasoning in a global spatial context. Experiments on the challenging VSI-Bench demonstrate that Spa3-VLM achieves state-of-the-art accuracy of 58.6% on 3D VQA, significantly outperforming prior methods. These results highlight PSFM as a scalable path toward advancing spatial intelligence. Code is available at this https URL.
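To make the Predictive Spatial Field Modeling (PSFM) idea concrete, here is a minimal self-supervised sketch of the training objective as the abstract describes it: an encoder compresses unposed multi-view features into a compact latent scene code, and a decoder conditioned on that latent synthesizes the feature field of an arbitrary query view, supervised only by that view's own features. All dimensions, layer shapes, and function names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): D-dim patch features,
# Z-dim latent scene code, P-dim query-view pose embedding.
D, Z, P, N_VIEWS, N_PATCH = 16, 8, 4, 3, 10

# Toy "encoder": pool patch features across unposed views, then project
# to a compact latent that must summarize the whole scene.
W_enc = rng.normal(0, 0.1, (D, Z))

def encode(views):              # views: (N_VIEWS, N_PATCH, D)
    pooled = views.mean(axis=(0, 1))   # holistic scene summary, (D,)
    return pooled @ W_enc              # compact latent code, (Z,)

# Toy "field decoder": predict the feature field of an arbitrary unseen
# view from the latent code plus that view's pose embedding.
W_dec = rng.normal(0, 0.1, (Z + P, N_PATCH * D))

def decode(latent, pose):       # pose: (P,)
    x = np.concatenate([latent, pose])
    return (x @ W_dec).reshape(N_PATCH, D)

# Self-supervised objective: match the synthesized field to the target
# view's actual 2D features -- no explicit 3D labels are needed.
views  = rng.normal(size=(N_VIEWS, N_PATCH, D))
target = rng.normal(size=(N_PATCH, D))
pose   = rng.normal(size=(P,))

pred = decode(encode(views), pose)
loss = float(np.mean((pred - target) ** 2))
```

In this sketch the latent bottleneck is what forces a holistic, view-invariant representation: the decoder can only reproduce arbitrary unseen views if the latent already encodes coherent scene geometry, which is the intuition behind reusing the pre-trained encoder inside Spa3-VLM.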

Top-level tags: computer vision, multi-modal, model training
Detailed tags: 3d visual reasoning, spatial representation, vision-language models, self-supervised learning, feature fields

Spa3R: Predictive Spatial Field Modeling for 3D Visual Reasoning


1️⃣ One-sentence summary

This paper proposes Spa3R, a self-supervised learning framework that learns a unified, view-invariant 3D spatial representation from 2D multi-view images alone. Through a lightweight adapter, this spatial understanding is transferred to existing vision-language models, achieving state-of-the-art performance on 3D visual question answering.

Source: arXiv: 2602.21186