arXiv submission date: 2026-03-11
📄 Abstract - GGPT: Geometry Grounded Point Transformer

Recent feed-forward networks have achieved remarkable progress in sparse-view 3D reconstruction by predicting dense point maps directly from RGB images. However, they often suffer from geometric inconsistencies and limited fine-grained accuracy due to the absence of explicit multi-view constraints. We introduce the Geometry-Grounded Point Transformer (GGPT), a framework that augments feed-forward reconstruction with reliable sparse geometric guidance. We first propose an improved Structure-from-Motion pipeline based on dense feature matching and lightweight geometric optimisation to efficiently estimate accurate camera poses and partial 3D point clouds from sparse input views. Building on this foundation, we propose a geometry-guided 3D point transformer that refines dense point maps under explicit partial-geometry supervision using an optimised guidance encoding. Extensive experiments demonstrate that our method provides a principled mechanism for integrating geometric priors with dense feed-forward predictions, producing reconstructions that are both geometrically consistent and spatially complete, recovering fine structures and filling gaps in textureless areas. Trained solely on ScanNet++ with VGGT predictions, GGPT generalises across architectures and datasets, substantially outperforming state-of-the-art feed-forward 3D reconstruction models in both in-domain and out-of-domain settings.
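To make the core idea concrete, here is a deliberately minimal sketch of "geometry-grounded" refinement: a dense point map predicted by a feed-forward network is nudged toward sparse, trusted 3D anchors (such as SfM triangulations). This is an illustrative toy, not the paper's learned transformer; the function name, the blending `weight`, and the per-pixel update rule are all assumptions for exposition.

```python
import numpy as np

def refine_with_sparse_anchors(dense_points, anchor_uv, anchor_xyz, weight=0.8):
    """Toy stand-in for geometry-guided refinement: blend dense point-map
    predictions toward sparse, trusted 3D anchors at known pixels.

    dense_points: (H, W, 3) dense point map from a feed-forward network
    anchor_uv:    (N, 2) integer pixel coordinates (u, v) of sparse anchors
    anchor_xyz:   (N, 3) trusted 3D positions at those pixels
    weight:       how strongly anchors override the dense prediction
    """
    refined = dense_points.copy()
    for (u, v), xyz in zip(anchor_uv, anchor_xyz):
        # Convex blend: weight=1 snaps fully to the anchor, 0 keeps the net.
        refined[v, u] = (1 - weight) * refined[v, u] + weight * xyz
    return refined

# Tiny demo: a flat 4x4 point map corrected by a single sparse anchor.
dense = np.zeros((4, 4, 3))
uv = np.array([[1, 2]])            # pixel (u=1, v=2)
xyz = np.array([[0.0, 0.0, 1.0]])  # trusted 3D point at that pixel
out = refine_with_sparse_anchors(dense, uv, xyz, weight=0.5)
print(out[2, 1])  # → [0.  0.  0.5]
```

In GGPT itself the correction is learned by a transformer under partial-geometry supervision rather than applied as a fixed blend, but the sketch captures the input/output contract: dense predictions in, sparse reliable geometry as guidance, refined dense points out.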

Top-level tags: computer vision, model training, systems
Detailed tags: 3d reconstruction, point cloud, structure-from-motion, transformer, sparse-view

GGPT: Geometry Grounded Point Transformer


1️⃣ One-Sentence Summary

This paper proposes a new method named GGPT, which guides the neural network with reliable sparse geometric information. This effectively addresses the geometric inconsistencies and missing fine detail that existing techniques commonly exhibit when reconstructing 3D models from only a few images, yielding more accurate and complete 3D models.

Source: arXiv: 2603.11174