arXiv submission date: 2026-02-17
📄 Abstract - Learning to Retrieve Navigable Candidates for Efficient Vision-and-Language Navigation

Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions and navigate through previously unseen environments. Recent approaches increasingly employ large language models (LLMs) as high-level navigators due to their flexibility and reasoning capability. However, prompt-based LLM navigation often suffers from inefficient decision-making, as the model must repeatedly interpret instructions from scratch and reason over noisy and verbose navigable candidates at each step. In this paper, we propose a retrieval-augmented framework to improve the efficiency and stability of LLM-based VLN without modifying or fine-tuning the underlying language model. Our approach introduces retrieval at two complementary levels. At the episode level, an instruction-level embedding retriever selects semantically similar successful navigation trajectories as in-context exemplars, providing task-specific priors for instruction grounding. At the step level, an imitation-learned candidate retriever prunes irrelevant navigable directions before LLM inference, reducing action ambiguity and prompt complexity. Both retrieval modules are lightweight, modular, and trained independently of the LLM. We evaluate our method on the Room-to-Room (R2R) benchmark. Experimental results demonstrate consistent improvements in Success Rate, Oracle Success Rate, and SPL on both seen and unseen environments. Ablation studies further show that instruction-level exemplar retrieval and candidate pruning contribute complementary benefits to global guidance and step-wise decision efficiency. These results indicate that retrieval-augmented decision support is an effective and scalable strategy for enhancing LLM-based vision-and-language navigation.
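The episode-level retriever described above can be illustrated with a minimal sketch: embed the current instruction, score it against a memory of (instruction embedding, trajectory) pairs from successful episodes, and return the top-k trajectories as in-context exemplars. The paper does not specify the encoder or similarity metric; the toy 3-d vectors and cosine similarity here are illustrative assumptions standing in for a real sentence encoder.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_exemplars(query_emb, memory, k=2):
    """memory: list of (instruction_embedding, trajectory) pairs drawn
    from successful navigation episodes. Returns the k trajectories whose
    instruction embeddings are most similar to the query instruction."""
    ranked = sorted(memory, key=lambda item: cosine(query_emb, item[0]),
                    reverse=True)
    return [traj for _, traj in ranked[:k]]

# Toy 3-d "embeddings"; a real system would use a sentence encoder.
memory = [
    ([1.0, 0.0, 0.0], "traj_kitchen"),
    ([0.9, 0.1, 0.0], "traj_hallway"),
    ([0.0, 1.0, 0.0], "traj_bedroom"),
]
print(retrieve_exemplars([1.0, 0.05, 0.0], memory, k=2))
# → ['traj_kitchen', 'traj_hallway']
```

The retrieved trajectories would then be formatted into the LLM prompt as exemplars, leaving the language model itself untouched, which matches the paper's claim that both retrieval modules are trained independently of the LLM.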

Top-level tags: llm agents multi-modal
Detailed tags: vision-and-language navigation retrieval-augmented efficiency decision-making embodied ai

Learning to Retrieve Navigable Candidates for Efficient Vision-and-Language Navigation


1️⃣ One-sentence summary

This paper proposes a retrieval-augmented framework that adds two lightweight retrieval modules: one supplies task-specific priors to the LLM navigator via retrieved exemplar trajectories, and the other filters out irrelevant navigable candidates, significantly improving the efficiency and stability of vision-and-language navigation without modifying the language model itself.

Source: arXiv 2602.15724