arXiv submission date: 2026-03-09
📄 Abstract - ViSA-Enhanced Aerial VLN: A Visual-Spatial Reasoning Enhanced Framework for Aerial Vision-Language Navigation

Existing aerial Vision-Language Navigation (VLN) methods predominantly adopt a detection-and-planning pipeline, which converts open-vocabulary detections into discrete textual scene graphs. These approaches are plagued by inadequate spatial reasoning capabilities and inherent linguistic ambiguities. To address these bottlenecks, we propose a Visual-Spatial Reasoning (ViSA) enhanced framework for aerial VLN. Specifically, a triple-phase collaborative architecture is designed to leverage structured visual prompting, enabling Vision-Language Models (VLMs) to perform direct reasoning on image planes without the need for additional training or complex intermediate representations. Comprehensive evaluations on the CityNav benchmark demonstrate that the ViSA-enhanced VLN achieves a 70.3% improvement in success rate compared to the fully trained state-of-the-art (SOTA) method, highlighting its strong potential as a backbone for aerial VLN systems.
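To make "structured visual prompting" concrete: the idea is to annotate the aerial image itself (e.g. with numbered candidate waypoints) so the VLM reasons directly on the image plane rather than over a textual scene graph. The following is a minimal sketch of that idea; the grid-based marker layout, the prompt wording, and the `query_vlm` placeholder are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of structured visual prompting for aerial VLN.
# Assumption: candidate waypoints are laid out on a regular grid;
# the paper's actual marker-generation strategy may differ.
from PIL import Image, ImageDraw


def overlay_waypoint_markers(image: Image.Image, spacing: int = 120) -> Image.Image:
    """Draw a numbered grid of candidate waypoint markers onto a copy of the image."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    marker_id = 0
    for y in range(spacing // 2, annotated.height, spacing):
        for x in range(spacing // 2, annotated.width, spacing):
            r = 14  # marker radius in pixels
            draw.ellipse((x - r, y - r, x + r, y + r), outline="red", width=3)
            draw.text((x - 6, y - 8), str(marker_id), fill="red")
            marker_id += 1
    return annotated


def build_navigation_prompt(instruction: str) -> str:
    """Ask the VLM to pick a marker, keeping all spatial reasoning on the image plane."""
    return (
        f"Navigation instruction: {instruction}\n"
        "The aerial view is annotated with numbered waypoint markers. "
        "Answer with the single marker number the drone should fly toward next."
    )


# Hypothetical usage with any multimodal chat API (query_vlm is a placeholder):
# view = Image.open("aerial_view.png")
# answer = query_vlm(overlay_waypoint_markers(view),
#                    build_navigation_prompt("Land near the blue warehouse"))
```

Because the model only has to name a visible marker, no fine-tuning or intermediate scene-graph representation is required, which matches the training-free claim in the abstract.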

Top-level tags: multi-modal agents, natural language processing
Detailed tags: vision-language navigation, spatial reasoning, visual prompting, aerial navigation, benchmark evaluation

ViSA-Enhanced Aerial VLN: A Visual-Spatial Reasoning Enhanced Framework for Aerial Vision-Language Navigation


1️⃣ One-Sentence Summary

This paper proposes a new visual-spatial reasoning enhanced framework that uses structured visual prompting to let vision-language models reason directly on images without additional training, significantly improving the success rate of drones navigating from language instructions.

Source: arXiv: 2603.08007