EventNeuS: 3D Mesh Reconstruction from a Single Event Camera
1️⃣ One-Sentence Summary
This paper proposes EventNeuS, a self-supervised neural model that, for the first time, combines 3D signed distance field and density field learning with event-stream data. Using only the colour event stream of a single event camera, it substantially improves 3D mesh reconstruction accuracy, outperforming the previous best method by roughly 30% on average on the key metrics.
Event cameras offer a compelling alternative to RGB cameras in many scenarios. While recent work has addressed event-based novel-view synthesis, dense 3D mesh reconstruction remains scarcely explored, and existing event-based techniques are severely limited in their 3D reconstruction accuracy. To address this limitation, we present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams. Our approach, for the first time, combines 3D signed distance function and density field learning with event-based supervision. Furthermore, we introduce spherical harmonics encodings into our model for enhanced handling of view-dependent effects. EventNeuS outperforms existing approaches by a significant margin, achieving 34% lower Chamfer distance and 31% lower mean absolute error on average compared to the best previous method.
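The abstract does not spell out the supervision signal or the encoding in detail. Below is a minimal sketch of the two ingredients it names, under assumptions borrowed from prior event-based NeRF work: event supervision is typically phrased as matching the rendered log-intensity change along a ray to the change implied by accumulated event polarities, and view directions are featurized with a real spherical harmonics basis. All function names, the degree-2 truncation, and the default contrast threshold are illustrative, not taken from the paper.

```python
import numpy as np

def sh_encoding(dirs: np.ndarray) -> np.ndarray:
    """Real spherical harmonics basis up to degree 2 for unit view directions.

    dirs: (N, 3) array of unit vectors. Returns (N, 9) features.
    Sign and normalisation conventions differ between papers; these are
    the commonly used real-SH constants.
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),           # l = 0
        0.488603 * y,                         # l = 1
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                     # l = 2
        1.092548 * y * z,
        0.315392 * (3.0 * z**2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x**2 - y**2),
    ], axis=-1)

def event_supervision_loss(log_i_t0: np.ndarray,
                           log_i_t1: np.ndarray,
                           event_sum: np.ndarray,
                           contrast: float = 0.25) -> float:
    """Event-based photometric loss (assumed formulation).

    Compares the rendered log-intensity change between two timestamps
    with the change implied by the events fired in between.

    log_i_t0, log_i_t1: (N,) rendered log intensities per ray at t0, t1.
    event_sum: (N,) signed sum of event polarities in (t0, t1].
    contrast: per-event log-intensity step (camera contrast threshold);
              the value here is a placeholder, not the paper's setting.
    """
    predicted = log_i_t1 - log_i_t0
    observed = contrast * event_sum
    return float(np.mean((predicted - observed) ** 2))
```

In this reading, the SH features would be concatenated with positional features at the input of the radiance branch, so view-dependent effects are absorbed by a low-frequency directional basis rather than by the geometry (SDF) branch.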
Source: arXiv: 2602.03847