4DLangVGGT: 4D Language-Visual Geometry Grounded Transformer
1️⃣ One-Sentence Summary
This paper proposes a new AI model, 4DLangVGGT, that can understand the geometric changes of a dynamic 3D scene and describe the objects in it with natural language in a single feed-forward pass, without time-consuming per-scene optimization, offering a more efficient and general scene-understanding tool for applications such as robotics and augmented reality.
Constructing 4D language fields is crucial for embodied AI, augmented/virtual reality, and 4D scene understanding, as they provide enriched semantic representations of dynamic environments and enable open-vocabulary querying in complex scenarios. However, existing approaches to 4D semantic field construction rely primarily on scene-specific Gaussian splatting, which requires per-scene optimization, exhibits limited generalization, and is difficult to scale to real-world applications. To address these limitations, we propose 4DLangVGGT, the first Transformer-based feed-forward unified framework for 4D language grounding, which jointly integrates geometric perception and language alignment within a single architecture. 4DLangVGGT has two key components: the 4D Visual Geometry Transformer, StreamVGGT, which captures spatio-temporal geometric representations of dynamic scenes; and the Semantic Bridging Decoder (SBD), which projects geometry-aware features into a language-aligned semantic space, enhancing semantic interpretability while preserving structural fidelity. Unlike prior methods that depend on costly per-scene optimization, 4DLangVGGT can be trained jointly across multiple dynamic scenes and applied directly at inference, achieving both deployment efficiency and strong generalization. This design significantly improves the practicality of large-scale deployment and establishes a new paradigm for open-vocabulary 4D scene understanding. Experiments on the HyperNeRF and Neu3D datasets demonstrate that our approach not only generalizes effectively but also achieves state-of-the-art performance, with up to 2% gains under per-scene training and 1% improvements under multi-scene training. Our code is released at this https URL.
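Since the abstract describes the pipeline only at a high level, the following is a minimal, hypothetical PyTorch sketch of how such a feed-forward 4D language field could be wired: a geometry backbone standing in for StreamVGGT, an SBD-style projection into a language-aligned embedding space, and cosine-similarity scoring against an open-vocabulary text embedding. All module names, dimensions, and interfaces below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a feed-forward 4D language-field pipeline in the
# spirit of 4DLangVGGT. Everything here (class names, dims, toy backbone)
# is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometryBackbone(nn.Module):
    """Stand-in for StreamVGGT: maps a video clip to per-pixel
    spatio-temporal geometry features (assumed interface)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Tiny 3D conv stack as a placeholder for the real transformer.
        self.net = nn.Sequential(
            nn.Conv3d(3, dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv3d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, 3, T, H, W) -> geometry features: (B, C, T, H, W)
        return self.net(video)


class SemanticBridgingDecoder(nn.Module):
    """Stand-in for the SBD: projects geometry-aware features into a
    language-aligned space (e.g. a CLIP-style text-embedding space)."""

    def __init__(self, geo_dim: int = 256, lang_dim: int = 512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(geo_dim, lang_dim),
            nn.GELU(),
            nn.Linear(lang_dim, lang_dim),
        )

    def forward(self, geo_feats: torch.Tensor) -> torch.Tensor:
        # (B, C, T, H, W) -> (B, T, H, W, lang_dim), L2-normalized so the
        # field can be queried by cosine similarity.
        x = geo_feats.permute(0, 2, 3, 4, 1)
        return F.normalize(self.proj(x), dim=-1)


def open_vocab_query(sem_field: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Score every space-time location against one text query embedding.

    sem_field: (B, T, H, W, D) language-aligned 4D field
    text_emb:  (D,) embedding of an open-vocabulary prompt
    returns:   (B, T, H, W) relevance map
    """
    return torch.einsum("bthwd,d->bthw", sem_field, F.normalize(text_emb, dim=-1))


if __name__ == "__main__":
    backbone = GeometryBackbone()
    sbd = SemanticBridgingDecoder()
    clip = torch.randn(1, 3, 4, 32, 32)   # toy 4-frame video
    field = sbd(backbone(clip))           # single feed-forward pass, no per-scene fit
    query = torch.randn(512)              # would come from a text encoder in practice
    heatmap = open_vocab_query(field, query)
    print(heatmap.shape)                  # torch.Size([1, 4, 32, 32])
```

The key point the sketch illustrates is the deployment story claimed in the abstract: because the field is produced by a single forward pass rather than per-scene optimization, the same trained weights can be applied to a new dynamic scene at inference time.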