Language and Geometry Grounded Sparse Voxel Representations for Holistic Scene Understanding
1️⃣ One-Sentence Summary
This work proposes a new method that combines language and geometric information to jointly model the appearance, semantics, and geometry of 3D scenes within a unified framework, achieving better holistic scene understanding and reconstruction than existing approaches.
Existing 3D open-vocabulary scene understanding methods mostly emphasize distilling language features from 2D foundation models into 3D feature fields, but largely overlook the synergy among scene appearance, semantics, and geometry. As a result, scene understanding often deviates from the underlying geometric structure of scenes and becomes decoupled from the reconstruction process. In this work, we propose a novel approach that leverages language and geometry grounded sparse voxel representations to comprehensively model appearance, semantics, and geometry within a unified framework. Specifically, we use 3D sparse voxels as primitives and employ an appearance field, a density field, a feature field, and a confidence field to holistically represent a 3D scene. To promote synergy among the appearance, density, and feature fields, we construct a feature modulation module and distill language features from a 2D foundation model into our 3D scene model. In addition, we integrate geometric distillation into feature field distillation to transfer geometric knowledge from a geometry foundation model to our 3D scene representations via depth correlation regularization and pattern consistency regularization. These components work together to synergistically model the appearance, semantics, and geometry of the 3D scene within a unified framework. Extensive experiments demonstrate that our approach achieves superior overall performance compared with state-of-the-art methods in holistic scene understanding and reconstruction.
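The abstract describes a representation in which each sparse voxel carries four fields (appearance, density, language feature, confidence), plus distillation losses against 2D and geometry foundation models. The sketch below is a minimal toy illustration of that structure, not the authors' implementation: all class/function names, field dimensions, the sigmoid-gated "modulation", the cosine distillation loss, and the Pearson-style depth correlation loss are assumptions chosen for illustration.

```python
import numpy as np


class SparseVoxelScene:
    """Hypothetical sketch of a sparse voxel scene representation:
    each active voxel stores appearance, density, language-feature,
    and confidence values (dimensions are illustrative assumptions)."""

    def __init__(self, coords, appearance_dim=3, feature_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        n = len(coords)
        self.coords = np.asarray(coords)                        # (N, 3) voxel indices
        self.appearance = rng.normal(size=(n, appearance_dim))  # appearance field
        self.density = np.zeros(n)                              # density field
        self.feature = rng.normal(size=(n, feature_dim))        # language feature field
        self.confidence = np.zeros(n)                           # confidence field (pre-sigmoid)

    def modulated_features(self):
        """Toy 'feature modulation': gate language features by
        sigmoid(confidence) so low-confidence voxels contribute less."""
        gate = 1.0 / (1.0 + np.exp(-self.confidence))
        return self.feature * gate[:, None]


def feature_distillation_loss(pred, target):
    """Cosine-style distillation loss against features from a 2D
    foundation model (a common choice; the paper's exact loss may differ)."""
    p = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + 1e-8)
    t = target / (np.linalg.norm(target, axis=-1, keepdims=True) + 1e-8)
    return float(np.mean(1.0 - np.sum(p * t, axis=-1)))


def depth_correlation_loss(rendered, teacher):
    """Depth-correlation-regularization sketch: encourage rendered depth
    to agree with a geometry foundation model's scale-ambiguous depth by
    penalizing 1 - Pearson correlation (invariant to affine rescaling)."""
    r = rendered - rendered.mean()
    t = teacher - teacher.mean()
    corr = np.sum(r * t) / (np.linalg.norm(r) * np.linalg.norm(t) + 1e-8)
    return float(1.0 - corr)
```

Because the correlation loss is affine-invariant, it transfers relative depth structure from the teacher without requiring the two depth maps to share a metric scale, which is one plausible reading of "depth correlation regularization" in the abstract.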
Source: arXiv: 2602.15734