Reconstruction Matters: Learning Geometry-Aligned BEV Representation through 3D Gaussian Splatting
1️⃣ One-Sentence Summary
This paper proposes a new method called Splat2BEV, which first uses 3D Gaussian Splatting to explicitly reconstruct the 3D scene from multi-view images, yielding bird's-eye-view features with more accurate geometric structure and thereby significantly improving performance on autonomous-driving perception tasks.
Bird's-Eye-View (BEV) perception serves as a cornerstone of autonomous driving, offering a unified spatial representation that fuses surrounding-view images to enable reasoning for various downstream tasks, such as semantic segmentation, 3D object detection, and motion prediction. However, most existing BEV perception frameworks adopt an end-to-end training paradigm, where image features are directly transformed into the BEV space and optimized solely through downstream task supervision. This formulation treats the entire perception process as a black box, often lacking explicit 3D geometric understanding and interpretability and leading to suboptimal performance. In this paper, we claim that an explicit 3D representation matters for accurate BEV perception, and we propose Splat2BEV, a Gaussian Splatting-assisted framework for BEV tasks. Splat2BEV aims to learn BEV feature representations that are both semantically rich and geometrically precise. We first pre-train a Gaussian generator that explicitly reconstructs 3D scenes from multi-view inputs, enabling the generation of geometry-aligned feature representations. These representations are then projected into the BEV space to serve as inputs for downstream tasks. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that Splat2BEV achieves state-of-the-art performance and validate the effectiveness of incorporating explicit 3D reconstruction into BEV perception.
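The abstract's projection step, mapping per-Gaussian features from the reconstructed 3D scene into a BEV grid, can be sketched conceptually. The following is a minimal hypothetical illustration, not the authors' implementation: every function name, tensor shape, and the simple mean-pooling aggregation are assumptions made for clarity.

```python
# Hypothetical sketch (NOT the paper's code): accumulate per-Gaussian
# feature vectors into the BEV cells their (x, y) centers fall in.
# Shapes, names, and the mean-pooling choice are all assumptions.
import numpy as np

def splat_to_bev(centers, features, bev_size=200, bev_range=50.0):
    """Project Gaussian features onto a BEV grid.

    centers:  (N, 3) Gaussian means in ego coordinates (metres).
    features: (N, C) per-Gaussian feature vectors.
    Returns a (bev_size, bev_size, C) BEV feature map.
    """
    C = features.shape[1]
    bev = np.zeros((bev_size, bev_size, C))
    counts = np.zeros((bev_size, bev_size, 1))
    # Map x/y in [-bev_range, bev_range) metres to grid indices.
    idx = ((centers[:, :2] + bev_range) / (2 * bev_range) * bev_size).astype(int)
    valid = (idx >= 0).all(axis=1) & (idx < bev_size).all(axis=1)
    for (i, j), f in zip(idx[valid], features[valid]):
        bev[i, j] += f
        counts[i, j] += 1
    return bev / np.maximum(counts, 1)  # mean-pool features per cell
```

A real system would splat each Gaussian with its projected footprint and opacity rather than a single cell, but the sketch shows the core idea: geometry (the 3D means) decides where each feature lands in the BEV plane.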
Source: arXiv: 2603.19193