MG-Nav: Dual-Scale Visual Navigation via Sparse Spatial Memory
1️⃣ One-Sentence Summary
This paper proposes MG-Nav, a dual-scale visual navigation framework that unifies global path planning and local obstacle-avoidance control through a compact sparse spatial memory graph, enabling efficient and robust navigation in unfamiliar environments without any scene-specific training.
We present MG-Nav (Memory-Guided Navigation), a dual-scale framework for zero-shot visual navigation that unifies global memory-guided planning with local geometry-enhanced control. At its core is the Sparse Spatial Memory Graph (SMG), a compact, region-centric memory where each node aggregates multi-view keyframe and object semantics, capturing both appearance and spatial structure while preserving viewpoint diversity. At the global level, the agent is localized on SMG and a goal-conditioned node path is planned via an image-to-instance hybrid retrieval, producing a sequence of reachable waypoints for long-horizon guidance. At the local level, a navigation foundation policy executes these waypoints in point-goal mode with obstacle-aware control, and switches to image-goal mode when navigating from the final node towards the visual target. To further enhance viewpoint alignment and goal recognition, we introduce VGGT-adapter, a lightweight geometric module built on the pre-trained VGGT model, which aligns observation and goal features in a shared 3D-aware space. MG-Nav operates global planning and local control at different frequencies, using periodic re-localization to correct errors. Experiments on HM3D Instance-Image-Goal and MP3D Image-Goal benchmarks demonstrate that MG-Nav achieves state-of-the-art zero-shot performance and remains robust under dynamic rearrangements and unseen scene conditions.
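To make the architecture concrete, below is a minimal Python sketch of how a region-centric SMG node, the image-to-instance hybrid retrieval score, and the dual-frequency planning/control loop could fit together. All names and interfaces here (SMGNode, hybrid_score, plan_node_path, localize, local_policy) are hypothetical illustrations under stated assumptions, not the authors' implementation; the VGGT-adapter feature alignment is omitted for brevity.

```python
# Hypothetical sketch of MG-Nav's dual-scale navigation loop.
# SMGNode, hybrid_score, plan_node_path, localize, and local_policy are
# illustrative assumptions, not the paper's released code.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class SMGNode:
    """Region-centric memory node: multi-view keyframe features plus object semantics."""
    node_id: int
    position: np.ndarray                                  # approximate region center (x, y, z)
    keyframe_feats: list = field(default_factory=list)    # per-view image embeddings
    object_feats: dict = field(default_factory=dict)      # instance label -> embedding
    neighbors: list = field(default_factory=list)         # ids of reachable neighbor nodes


def hybrid_score(node: SMGNode, goal_img_feat: np.ndarray,
                 goal_obj_feat: np.ndarray, alpha: float = 0.5) -> float:
    """Image-to-instance hybrid retrieval: blend image- and object-level similarity."""
    img_sim = max((float(goal_img_feat @ f) for f in node.keyframe_feats), default=0.0)
    obj_sim = max((float(goal_obj_feat @ f) for f in node.object_feats.values()), default=0.0)
    return alpha * img_sim + (1.0 - alpha) * obj_sim


def navigate(graph, obs, goal_img_feat, goal_obj_feat,
             localize, plan_node_path, local_policy,
             relocalize_every=20, max_steps=500):
    """Dual-frequency loop: slow global planning on the SMG, fast local control."""
    goal_node = max(graph.values(),
                    key=lambda n: hybrid_score(n, goal_img_feat, goal_obj_feat))
    waypoints = plan_node_path(graph, localize(graph, obs), goal_node)

    for step in range(max_steps):
        if step % relocalize_every == 0:
            # Periodic re-localization on the SMG corrects drift, then the node path is re-planned.
            waypoints = plan_node_path(graph, localize(graph, obs), goal_node)
        if waypoints:
            # Point-goal mode: follow the next reachable waypoint with obstacle-aware control.
            obs = local_policy.step(obs, point_goal=waypoints[0].position)
            if local_policy.reached(obs, waypoints[0]):
                waypoints.pop(0)
        else:
            # Image-goal mode: from the final node, home in on the visual target.
            obs = local_policy.step(obs, image_goal=goal_img_feat)
            if local_policy.done(obs):
                break
    return obs
```

The point of the sketch is the frequency split described in the abstract: the global graph is only queried for re-localization and re-planning every `relocalize_every` steps, while the local foundation policy acts at every step, switching from point-goal to image-goal mode once the final node is reached.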