arXiv submission date: 2026-03-16
📄 Abstract - MeMix: Writing Less, Remembering More for Streaming 3D Reconstruction

Reconstruction is a fundamental task in 3D vision and a core capability for spatial intelligence. In particular, streaming 3D reconstruction is central to real-time spatial perception, yet existing recurrent online models often suffer progressive degradation on long sequences due to state drift and forgetting, motivating inference-time remedies. We present MeMix, a training-free, plug-and-play module that improves streaming reconstruction by recasting the recurrent state as a Memory Mixture. MeMix partitions the state into multiple independent memory patches and updates only the least-aligned patches while preserving the others exactly. This selective update mitigates catastrophic forgetting while retaining $O(1)$ inference memory, and requires no fine-tuning or additional learnable parameters, making it directly applicable to existing recurrent reconstruction models. Across standard benchmarks (ScanNet, 7-Scenes, KITTI, etc.), under identical backbones and inference settings, MeMix reduces reconstruction completeness error by 15.3% on average (up to 40.0%) across 300--500 frame streams on 7-Scenes. The code is available at this https URL
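The selective-update idea in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' implementation: the alignment measure (cosine similarity), the number of rewritten patches `update_k`, and the blend rate `lr` are all assumptions made here for clarity.

```python
import numpy as np

def memix_update(memory, features, update_k=1, lr=0.5):
    """Hypothetical sketch of a MeMix-style selective update.

    memory:   (P, D) array, P independent memory patches of dim D
    features: (D,) incoming frame feature

    Only the `update_k` patches least aligned with the incoming
    feature are blended toward it; every other patch is preserved
    exactly, so memory stays fixed-size (O(1) across the stream).
    """
    # Cosine alignment between each memory patch and the new feature.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(features) + 1e-8
    align = memory @ features / norms
    # Candidates for writing: the least-aligned patches.
    idx = np.argsort(align)[:update_k]
    out = memory.copy()
    # "Write less": touch only the selected patches...
    out[idx] = (1 - lr) * memory[idx] + lr * features
    # ..."remember more": the untouched patches keep their content.
    return out
```

Because well-aligned patches are never selected, information they already hold is retained verbatim across frames, which is the mechanism the paper credits for mitigating catastrophic forgetting.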

Top-level tags: computer vision, systems, model training
Detailed tags: 3D reconstruction, streaming, memory mixture, catastrophic forgetting, real-time perception

MeMix: Writing Less, Remembering More for Streaming 3D Reconstruction


1️⃣ One-Sentence Summary

This paper proposes MeMix, a plug-and-play module that partitions the recurrent memory state into multiple independent patches and updates them selectively, addressing the performance degradation that existing streaming 3D reconstruction models suffer on long sequences due to state drift and forgetting, and significantly improving reconstruction accuracy without any additional training.

Source: arXiv:2603.15330