arXiv submission date: 2026-04-09
📄 Abstract - Scal3R: Scalable Test-Time Training for Large-Scale 3D Reconstruction

This paper addresses the task of large-scale 3D scene reconstruction from long video sequences. Recent feed-forward reconstruction models have shown promising results by directly regressing 3D geometry from RGB images without explicit 3D priors or geometric constraints. However, these methods often struggle to maintain reconstruction accuracy and consistency over long sequences due to limited memory capacity and the inability to effectively capture global contextual cues. In contrast, humans naturally exploit a global understanding of the scene to inform local perception. Motivated by this, we propose a novel neural global context representation that efficiently compresses and retains long-range scene information, enabling the model to leverage extensive contextual cues for enhanced reconstruction accuracy and consistency. The context representation is realized through a set of lightweight neural sub-networks that are rapidly adapted at test time via self-supervised objectives, which substantially increases memory capacity without incurring significant computational overhead. Experiments on multiple large-scale benchmarks, including the KITTI Odometry [Geiger2012CVPR] and Oxford Spires [tao2025spires] datasets, demonstrate the effectiveness of our approach in handling ultra-large scenes, achieving leading pose accuracy and state-of-the-art 3D reconstruction accuracy while maintaining efficiency. Code is available at this https URL.
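The abstract's core mechanism is test-time training: a lightweight sub-network acts as a compressed scene memory and is updated online with a self-supervised objective as frames arrive. The paper does not publish its exact architecture here, so the following is only a minimal illustrative sketch of that idea, assuming a toy linear encoder as the "memory network" and a reconstruction loss as the self-supervised objective; all names, shapes, and the learning rate are hypothetical.

```python
import numpy as np

# Toy sketch of test-time training (TTT) for a neural scene memory.
# A small linear map W compresses a frame feature x into a context code z,
# and W.T decodes it back; W is adapted online by gradient descent on a
# self-supervised reconstruction loss. Purely illustrative, not the
# paper's implementation.

rng = np.random.default_rng(0)
dim, code = 8, 3                              # feature dim, context dim
W = rng.normal(scale=0.1, size=(code, dim))   # encoder; decoder is W.T


def ttt_step(W, x, lr=0.05):
    """One self-supervised update: descend on 0.5 * ||W.T @ W @ x - x||^2."""
    z = W @ x                 # compress the frame feature into the context
    x_hat = W.T @ z           # reconstruct the feature from the context
    err = x_hat - x
    # Analytic gradient of the loss with respect to W.
    grad = np.outer(z, err) + W @ np.outer(err, x)
    return W - lr * grad


x = rng.normal(size=dim)      # stand-in for one frame's feature vector
losses = []
for _ in range(200):          # rapid adaptation on the incoming observation
    W = ttt_step(W, x)
    losses.append(float(np.sum((W.T @ (W @ x) - x) ** 2)))
```

The point of the sketch is the training signal, not the architecture: because the objective needs no labels, the memory can keep adapting during inference, which is how a fixed-size network can serve as an expanding store of long-range context.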

Top-level tags: computer vision, model training, systems
Detailed tags: 3d reconstruction, test-time training, neural representation, scene understanding, large-scale

Scal3R: Scalable Test-Time Training for Large-Scale 3D Reconstruction


1️⃣ One-sentence summary

This paper proposes a new method that uses rapidly adapted lightweight neural sub-networks to efficiently compress and exploit global scene information from long videos, significantly improving the accuracy and consistency of large-scale 3D reconstruction while preserving computational efficiency.

Source: arXiv: 2604.08542