Interp3R: Continuous-time 3D Geometry Estimation with Frames and Events
1️⃣ One-sentence summary
This paper proposes the first method that uses event-camera data to complement conventional image frames, enabling pointmap-based 3D reconstruction models to estimate scene depth and camera poses at arbitrary time instants, extending 3D geometric perception from discrete to continuous time.
In recent years, 3D visual foundation models pioneered by pointmap-based approaches such as DUSt3R have attracted considerable interest, achieving impressive accuracy and strong generalization across diverse scenes. However, these methods are inherently limited to recovering scene geometry only at the discrete time instants when images are captured, leaving the scene evolution during the blind time between consecutive frames largely unexplored. We introduce Interp3R, to the best of our knowledge the first method that enhances pointmap-based models to estimate depth and camera poses at arbitrary time instants. Interp3R leverages asynchronous event data to interpolate pointmaps produced by frame-based models, enabling temporally continuous geometric representations. Depth and camera poses are then jointly recovered by aligning the interpolated pointmaps, together with those predicted by the underlying frame-based models, into a consistent spatial framework. We train Interp3R exclusively on a synthetic dataset, yet demonstrate strong generalization across a wide range of synthetic and real-world benchmarks. Extensive experiments show that Interp3R outperforms, by a considerable margin, state-of-the-art baselines that follow a two-stage pipeline of 2D video frame interpolation followed by 3D geometry estimation.
Source: arXiv: 2603.14528