📄 Paper Summary
3D Aware Region Prompted Vision Language Model (Spatial Region 3D, SR-3D)
1️⃣ One-Sentence Summary
This paper proposes a new model that understands 2D images and 3D data in a unified way: by simply marking a region in a single image or directly in 3D space, users get precise cross-view spatial reasoning and metric measurement, and the model applies to in-the-wild video analysis without exhaustive annotation.
We present the Spatial Region 3D (SR-3D) aware vision-language model, which connects single-view 2D images and multi-view 3D data through a shared visual token space. SR-3D supports flexible region prompting, allowing users to annotate regions with bounding boxes or segmentation masks on any frame, or directly in 3D, without the need for exhaustive multi-frame labeling. We achieve this by enriching 2D visual features with 3D positional embeddings, which allows the 3D model to draw upon strong 2D priors for more accurate spatial reasoning across frames, even when objects of interest do not co-occur within the same view. Extensive experiments on both general 2D vision-language and specialized 3D spatial benchmarks demonstrate that SR-3D achieves state-of-the-art performance, underscoring its effectiveness in unifying the 2D and 3D representation spaces for scene understanding. Moreover, we observe applicability to in-the-wild videos without sensory 3D inputs or ground-truth 3D annotations, where SR-3D accurately infers spatial relationships and metric measurements.
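The core mechanism described above, enriching 2D visual features with 3D positional embeddings so that tokens from different frames live in one 3D-aware space, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the class name `Pos3DEnrichedFeatures`, the MLP design, and all tensor shapes are hypothetical.

```python
import torch
import torch.nn as nn

class Pos3DEnrichedFeatures(nn.Module):
    """Hypothetical sketch: fuse 2D patch features with 3D positional embeddings.

    Assumes each 2D patch token has an associated 3D coordinate
    (e.g., from unprojected depth); not the SR-3D implementation.
    """

    def __init__(self, feat_dim: int = 1024, hidden_dim: int = 256):
        super().__init__()
        # MLP lifting (x, y, z) patch coordinates into the visual feature space.
        self.pos_mlp = nn.Sequential(
            nn.Linear(3, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, feats_2d: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        # feats_2d: (B, N, feat_dim) patch tokens from a 2D vision encoder
        # xyz:      (B, N, 3) 3D coordinates of each patch
        # Additive fusion keeps the strong 2D priors intact while making
        # every token 3D-aware.
        return feats_2d + self.pos_mlp(xyz)

# Usage sketch: tokens from any frame share one 3D-aware space, so a region
# prompted in one view can be related to objects seen only in another view.
feats = torch.randn(2, 196, 1024)   # two views, 196 patches each
coords = torch.rand(2, 196, 3)      # per-patch 3D coordinates
tokens_3d = Pos3DEnrichedFeatures()(feats, coords)
```

Because the 3D information enters additively as a positional signal, the pretrained 2D encoder stays unchanged, which is one plausible reading of how SR-3D lets the 3D model "draw upon strong 2D priors".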