arXiv submission date: 2026-03-16
📄 Abstract - E2EGS: Event-to-Edge Gaussian Splatting for Pose-Free 3D Reconstruction

The emergence of neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) has advanced novel view synthesis (NVS). These methods, however, require high-quality RGB inputs and accurate corresponding poses, limiting robustness under real-world conditions such as fast camera motion or adverse lighting. Event cameras, which capture brightness changes at each pixel with high temporal resolution and wide dynamic range, enable precise sensing of dynamic scenes and offer a promising solution. However, existing event-based NVS methods either assume known poses or rely on depth estimation models that are bounded by their initial observations, failing to generalize as the camera traverses previously unseen regions. We present E2EGS, a pose-free framework operating solely on event streams. Our key insight is that edge information provides rich structural cues essential for accurate trajectory estimation and high-quality NVS. To extract edges from noisy event streams, we exploit the distinct spatio-temporal characteristics of edges and non-edge regions. The event camera's movement induces consistent events along edges, while non-edge regions produce sparse noise. We leverage this through a patch-based temporal coherence analysis that measures local variance to extract edges while robustly suppressing noise. The extracted edges guide structure-aware Gaussian initialization and enable edge-weighted losses throughout initialization, tracking, and bundle adjustment. Extensive experiments on both synthetic and real datasets demonstrate that E2EGS achieves superior reconstruction quality and trajectory accuracy, establishing a fully pose-free paradigm for event-based 3D reconstruction.
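The abstract's core mechanism is a patch-based temporal coherence analysis: pixels along edges receive many events with consistent timestamps as the camera moves, while non-edge regions produce sparse, temporally incoherent noise. The paper does not give an implementation, but the idea can be sketched as follows; the function name, thresholds, and event layout `(x, y, t)` are all assumptions for illustration, not the authors' code.

```python
import numpy as np

def extract_edges(events, height, width, patch=5,
                  density_thresh=3, var_thresh=0.01):
    """Hypothetical sketch of patch-based temporal coherence analysis.

    events: (N, 3) array of (x, y, t) event tuples. A patch is marked
    as edge if it is dense in events (edges fire repeatedly under
    camera motion) AND its timestamps have low variance (temporal
    coherence); sparse or incoherent patches are treated as noise.
    """
    # Per-pixel event count and running timestamp sums for variance.
    count = np.zeros((height, width))
    t_sum = np.zeros((height, width))
    t_sq = np.zeros((height, width))
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    np.add.at(count, (y, x), 1)       # unbuffered accumulation
    np.add.at(t_sum, (y, x), t)
    np.add.at(t_sq, (y, x), t * t)

    edge = np.zeros((height, width), dtype=bool)
    r = patch // 2
    for i in range(r, height - r):
        for j in range(r, width - r):
            c = count[i-r:i+r+1, j-r:j+r+1].sum()
            if c < density_thresh:
                continue  # sparse patch -> likely noise
            mean = t_sum[i-r:i+r+1, j-r:j+r+1].sum() / c
            var = t_sq[i-r:i+r+1, j-r:j+r+1].sum() / c - mean**2
            if var < var_thresh:  # temporally coherent -> edge
                edge[i, j] = True
    return edge
```

On a synthetic stream with a dense line of same-timestamp events and a lone random event, this marks only the line as edge; in the paper the resulting edge map then drives Gaussian initialization and the edge-weighted losses.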

Top-level tags: computer vision, robotics, systems
Detailed tags: 3d reconstruction, event cameras, gaussian splatting, pose estimation, novel view synthesis

E2EGS: Event-to-Edge Gaussian Splatting for Pose-Free 3D Reconstruction


1️⃣ One-Sentence Summary

This work proposes a new method that reconstructs 3D scenes using only event-camera data: by intelligently extracting edge information from the event stream, it estimates camera motion and synthesizes high-quality novel views, fully removing the dependence on conventional RGB images and precomputed camera poses.

Source: arXiv 2603.14684