Appearance Decomposition Gaussian Splatting for Multi-Traversal Reconstruction
1️⃣ One-Sentence Summary
This paper proposes a new method called ADM-GS that decomposes the appearance of the static scene into a traversal-invariant "material" component and a traversal-dependent "illumination" component. This decomposition addresses a key difficulty in autonomous driving simulation: when multiple video sequences of the same scene are captured at different times under different lighting conditions, their reconstructed appearances are inconsistent. ADM-GS thereby achieves higher-quality, more consistent digital scene reconstruction.
Multi-traversal scene reconstruction is important for high-fidelity autonomous driving simulation and digital twin construction. This task involves integrating multiple sequences captured from the same geographical area at different times. In this context, a primary challenge is the significant appearance inconsistency across traversals caused by varying illumination and environmental conditions, despite the shared underlying geometry. This paper presents ADM-GS (Appearance Decomposition Gaussian Splatting for Multi-Traversal Reconstruction), a framework that applies an explicit appearance decomposition to the static background to alleviate appearance entanglement across traversals. For the static background, we decompose the appearance into traversal-invariant material, representing intrinsic material properties, and traversal-dependent illumination, capturing lighting variations. Specifically, we propose a neural light field that utilizes a frequency-separated hybrid encoding strategy. By incorporating surface normals and explicit reflection vectors, this design separately captures low-frequency diffuse illumination and high-frequency specular reflections. Quantitative evaluations on the Argoverse 2 and Waymo Open datasets demonstrate the effectiveness of ADM-GS. In multi-traversal experiments, our method achieves a +0.98 dB PSNR improvement over existing latent-based baselines while producing more consistent appearance across traversals. Code will be available at this https URL.
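The decomposition described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `diffuse_field` and `specular_field` are hypothetical stand-ins for the two branches of the neural light field, and the frequency counts are illustrative. The low-frequency branch is conditioned on the surface normal (diffuse illumination), while the high-frequency branch is conditioned on the explicit reflection vector (specular reflections), mirroring the frequency-separated hybrid encoding the paper describes.

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """Sinusoidal encoding at frequencies 2^0 .. 2^(num_freqs - 1)."""
    feats = []
    for i in range(num_freqs):
        feats.append(np.sin(2.0**i * np.pi * x))
        feats.append(np.cos(2.0**i * np.pi * x))
    return np.concatenate(feats, axis=-1)

def reflect(view_dir, normal):
    """Reflect the viewing direction about the surface normal: r = d - 2(d.n)n."""
    return view_dir - 2.0 * np.sum(view_dir * normal, axis=-1, keepdims=True) * normal

def decomposed_color(material_albedo, normal, view_dir,
                     diffuse_field, specular_field):
    """Compose traversal-invariant material with traversal-dependent lighting.

    diffuse_field / specular_field are hypothetical per-traversal light-field
    branches (e.g. small MLPs); here they are arbitrary callables.
    """
    # Low-frequency branch: diffuse illumination conditioned on the normal.
    diffuse = diffuse_field(positional_encoding(normal, num_freqs=2))
    # High-frequency branch: specular reflections via the reflection vector.
    refl = reflect(view_dir, normal)
    specular = specular_field(positional_encoding(refl, num_freqs=6))
    # Traversal-invariant albedo modulated by traversal-dependent illumination.
    return material_albedo * diffuse + specular
```

Because only the light-field branches depend on the traversal, the same material parameters can be shared across all sequences, which is what enforces appearance consistency across traversals.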
Source: arXiv:2604.05908