arXiv submission date: 2026-03-09
📄 Abstract - UniGround: Universal 3D Visual Grounding via Training-Free Scene Parsing

Understanding and localizing objects in complex 3D environments from natural language descriptions, known as 3D Visual Grounding (3DVG), is a foundational challenge in embodied AI, with broad implications for robotics, augmented reality, and human-machine interaction. Large-scale pre-trained foundation models have driven significant progress on this front, enabling open-vocabulary 3DVG that allows systems to locate arbitrary objects in a given scene. However, their reliance on pre-trained models constrains 3D perception and reasoning within the inherited knowledge boundaries, resulting in limited generalization to unseen spatial relationships and poor robustness to out-of-distribution scenes. In this paper, we replace this constrained perception with training-free visual and geometric reasoning, thereby unlocking open-world 3DVG that enables the localization of any object in any scene beyond the training data. Specifically, the proposed UniGround operates in two stages: a Global Candidate Filtering stage that constructs scene candidates through training-free 3D topology and multi-view semantic encoding, and a Local Precision Grounding stage that leverages multi-scale visual prompting and structured reasoning to precisely identify the target object. Experiments on ScanRefer and EmbodiedScan show that UniGround achieves 46.1%/34.1% Acc@0.25/0.5 on ScanRefer and 28.7% Acc@0.25 on EmbodiedScan, establishing a new state-of-the-art among zero-shot methods on EmbodiedScan without any 3D supervision. We further evaluate UniGround in real-world environments under uncontrolled reconstruction conditions and substantial domain shift, showing that training-free reasoning generalizes robustly beyond curated benchmarks.
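The two-stage pipeline described above (coarse candidate filtering, then precise grounding) can be illustrated with a minimal sketch. Everything here is hypothetical: the `Object3D` class, the matching-by-label filter, and the nearest-to-anchor rule are simple stand-ins for the paper's training-free 3D topology / multi-view semantic encoding and multi-scale visual prompting, not the actual UniGround implementation.

```python
from dataclasses import dataclass


@dataclass
class Object3D:
    """Hypothetical parsed scene object with a semantic label and 3D center."""
    label: str
    center: tuple  # (x, y, z)


def global_candidate_filter(scene, query_noun):
    """Stage 1 (sketch): keep objects whose semantic label matches the
    query's head noun -- a stand-in for training-free 3D topology and
    multi-view semantic encoding."""
    return [o for o in scene if o.label == query_noun]


def local_precision_grounding(candidates, anchor):
    """Stage 2 (sketch): disambiguate candidates with a spatial relation
    (here, 'nearest to the anchor object'), standing in for multi-scale
    visual prompting and structured reasoning."""
    def sq_dist(obj):
        return sum((a - b) ** 2 for a, b in zip(obj.center, anchor.center))
    return min(candidates, key=sq_dist)


# Toy scene for the query "the chair next to the table".
scene = [
    Object3D("chair", (0.0, 0.0, 0.0)),
    Object3D("chair", (3.0, 1.0, 0.0)),
    Object3D("table", (2.5, 1.0, 0.0)),
]
anchor = scene[2]  # the table mentioned in the query
candidates = global_candidate_filter(scene, "chair")
target = local_precision_grounding(candidates, anchor)
print(target.center)  # → (3.0, 1.0, 0.0)
```

The point of the two-stage split is that the cheap global filter shrinks the search space before the more expensive relational reasoning runs, which is why the filtering stage only needs coarse semantics.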

Top tags: computer vision, multi-modal, robotics
Detailed tags: 3d visual grounding, scene parsing, zero-shot, open-vocabulary, embodied ai

UniGround: Universal 3D Visual Grounding via Training-Free Scene Parsing


1️⃣ One-Sentence Summary

This paper proposes a new method called UniGround that, without any additional training, precisely localizes any object in complex 3D scenes from natural language descriptions using only visual and geometric reasoning. It breaks through the knowledge limits of prior methods that rely on pre-trained models, demonstrating strong generalization and robustness in open-world scenes.

Source: arXiv:2603.08131