arXiv submission date: 2025-12-23
📄 Abstract - Learning to Reason in 4D: Dynamic Spatial Understanding for Vision Language Models

Vision-language models (VLMs) excel at general understanding yet remain weak at dynamic spatial reasoning (DSR), i.e., reasoning about the evolution of object geometry and relationships in 3D space over time, largely due to the scarcity of scalable 4D-aware training resources. To bridge this gap across the dataset, benchmark, and model aspects, we introduce DSR Suite. First, we propose an automated pipeline that generates multiple-choice question-answer pairs from in-the-wild videos for DSR. By leveraging modern vision foundation models, the pipeline extracts rich geometric and motion information, including camera poses, local point clouds, object masks, orientations, and 3D trajectories. These geometric cues enable the construction of DSR-Train for learning and the further human-refined DSR-Bench for evaluation. Compared with previous works, our data emphasize (i) in-the-wild video sources, (ii) object- and scene-level 3D requirements, (iii) viewpoint transformations, (iv) multi-object interactions, and (v) fine-grained, procedural answers. Beyond data, we propose a lightweight Geometry Selection Module (GSM) to seamlessly integrate geometric priors into VLMs, which condenses question semantics and extracts question-relevant knowledge from pretrained 4D reconstruction priors into a compact set of geometry tokens. This targeted extraction avoids overwhelming the model with irrelevant knowledge. Experiments show that integrating DSR-Train and GSM into Qwen2.5-VL-7B significantly enhances its dynamic spatial reasoning capability, while maintaining accuracy on general video understanding benchmarks.
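
The abstract describes the GSM only at a high level: question semantics are condensed and used to select question-relevant features from pretrained 4D reconstruction priors, yielding a compact set of geometry tokens for the VLM. Below is a minimal, hypothetical PyTorch sketch of one way such a module could look. The class name `GeometrySelectionModule`, the cross-attention design, and all dimensions and token counts are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of a Geometry Selection Module (GSM):
# (1) condense question semantics into a few learnable queries,
# (2) use those queries to pull question-relevant features out of
#     frozen 4D-reconstruction outputs (poses, point clouds, trajectories),
# producing a compact set of geometry tokens appended to the VLM input.
import torch
import torch.nn as nn

class GeometrySelectionModule(nn.Module):
    def __init__(self, d_model=1024, d_geo=768, num_geo_tokens=16, num_heads=8):
        super().__init__()
        # Learnable queries, later conditioned on the question.
        self.geo_queries = nn.Parameter(torch.randn(num_geo_tokens, d_model) * 0.02)
        # Step 1: condense question semantics into the queries.
        self.question_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Step 2: select question-relevant geometry features.
        self.geo_proj = nn.Linear(d_geo, d_model)  # map 4D features to VLM width
        self.geo_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.out_norm = nn.LayerNorm(d_model)

    def forward(self, question_emb, geo_feats):
        """
        question_emb: (B, Lq, d_model) token embeddings of the question
        geo_feats:    (B, Lg, d_geo)   features from a frozen 4D reconstruction model
        returns:      (B, num_geo_tokens, d_model) compact geometry tokens
        """
        B = question_emb.size(0)
        q = self.geo_queries.unsqueeze(0).expand(B, -1, -1)
        # Condense question semantics into the queries.
        q, _ = self.question_attn(q, question_emb, question_emb)
        # Extract only question-relevant geometric knowledge.
        kv = self.geo_proj(geo_feats)
        geo_tokens, _ = self.geo_attn(q, kv, kv)
        return self.out_norm(geo_tokens)
```

The design choice sketched here, a small set of cross-attention queries, is one common way to keep the geometry representation compact so the VLM is not overwhelmed with irrelevant 4D features, which matches the targeted-extraction goal stated in the abstract.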

Top-level tags: multi-modal, natural language processing, computer vision
Detailed tags: dynamic spatial reasoning, vision-language models, 4D understanding, geometry integration, video question answering

Learning to Reason in 4D: Dynamic Spatial Understanding for Vision Language Models


1️⃣ One-Sentence Summary

By building a complete suite of training data and evaluation benchmarks, and by designing a lightweight module that integrates geometric prior knowledge, this paper significantly improves vision-language models' ability to understand and reason about how 3D objects move and interact over time.

Source: arXiv 2512.20557