Predictive Photometric Uncertainty in Gaussian Splatting for Novel View Synthesis
1️⃣ One-Sentence Summary
This work adds a lightweight module to 3D Gaussian Splatting that predicts how reliable each pixel of a rendered image is, turning a technique originally used only for photorealistic rendering into a trustworthy 3D environment map for safety-critical applications such as autonomous driving.
Recent advances in 3D Gaussian Splatting have enabled impressive photorealistic novel view synthesis. However, to transition from a pure rendering engine to a reliable spatial map for autonomous agents and safety-critical applications, knowing where the representation is uncertain is as important as the rendering fidelity itself. We bridge this critical gap by introducing a lightweight, plug-and-play framework for pixel-wise, view-dependent predictive uncertainty estimation. Our post-hoc method formulates uncertainty as a Bayesian-regularized linear least-squares optimization over reconstruction residuals. This architecture-agnostic approach extracts a per-primitive uncertainty channel without modifying the underlying scene representation or degrading baseline visual fidelity. Crucially, we demonstrate that this actionable reliability signal turns 3D Gaussian Splatting into a trustworthy spatial map and further improves state-of-the-art performance across three critical downstream perception tasks: active view selection, pose-agnostic scene change detection, and pose-agnostic anomaly detection.
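The core idea of a "Bayesian-regularized linear least-squares optimization over reconstruction residuals" can be illustrated in isolation. The sketch below is not the paper's implementation: it assumes a hypothetical blending matrix `A` (pixels × primitives, e.g. per-pixel splat weights) and recovers a per-primitive uncertainty channel `u` from per-pixel residual magnitudes `r` via a Gaussian prior on `u`, which reduces to ridge-regularized least squares with a closed-form solution.

```python
import numpy as np

# Hypothetical setup: A maps per-primitive uncertainties to per-pixel
# residual magnitudes; all names and shapes here are illustrative.
rng = np.random.default_rng(0)
n_pixels, n_primitives = 200, 10
A = rng.random((n_pixels, n_primitives))            # blend weights (assumed known)
u_true = rng.random(n_primitives)                   # synthetic ground truth
r = A @ u_true + 0.01 * rng.normal(size=n_pixels)   # noisy reconstruction residuals

# Gaussian prior on u with precision lam acts as a Tikhonov/ridge term:
#   u* = argmin_u ||A u - r||^2 + lam ||u||^2
#      = (A^T A + lam I)^{-1} A^T r
lam = 1e-2
u_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_primitives), A.T @ r)

# A per-pixel, view-dependent uncertainty map is then obtained by
# splatting the per-primitive channel back through the same weights.
sigma_pixels = A @ u_hat
```

Because the solve is linear in the number of primitives, such a post-hoc fit leaves the trained scene representation untouched, consistent with the plug-and-play framing above.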
Source: arXiv:2603.22786