Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
1️⃣ One-Sentence Summary
This paper proposes a method that teaches an AI model to actively move and choose the most informative viewpoint for answering a question, much as a person walks around to get a better look at something, thereby improving the performance of visual question answering systems.
Vision Language Models (VLMs) excel at visual question answering (VQA) but remain limited to snapshot vision, reasoning from static images. In contrast, embodied agents require ambulatory vision, actively moving to obtain more informative views. We introduce Visually Grounded Active View Selection (VG-AVS), the task of selecting the most informative next viewpoint using only the visual information in the current image, without relying on scene memory or external knowledge. To support this task, we construct a synthetic dataset with automatically generated paired query-target views and question-answer prompts. We also propose a framework that fine-tunes pretrained VLMs through supervised fine-tuning (SFT) followed by RL-based policy optimization. Our approach achieves strong question-answering performance through viewpoint selection and generalizes robustly to unseen synthetic and real scenes. Furthermore, incorporating our learned VG-AVS framework into existing scene-exploration-based EQA systems improves downstream question-answering accuracy.
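To make the two-stage pipeline in the abstract concrete, here is a minimal, hypothetical sketch of how SFT followed by RL-based policy optimization over viewpoint selection could be organized. All class and function names (`Sample`, `ToyViewSelector`, `sft_stage`, `rl_stage`, the toy reward) are illustrative placeholders assumed for this sketch, not the authors' actual implementation or data format.

```python
# Hypothetical sketch of the VG-AVS training pipeline described in the abstract:
# Stage 1: supervised fine-tuning (SFT) on automatically generated paired
#          query-target views with question-answer prompts.
# Stage 2: RL-based policy optimization, rewarding viewpoints that lead to
#          correct downstream question answering.
# Names and data layout are placeholders, not the paper's implementation.

import random
from dataclasses import dataclass


@dataclass
class Sample:
    query_view: str          # current image observation (placeholder path)
    candidate_views: list    # candidate next viewpoints
    target_view: int         # index of the annotated most informative view
    question: str
    answer: str


def build_synthetic_dataset(n=100):
    """Stand-in for the automatically generated query-target view pairs."""
    data = []
    for i in range(n):
        data.append(Sample(
            query_view=f"scene_{i}/view_0.png",
            candidate_views=[f"scene_{i}/view_{k}.png" for k in range(1, 5)],
            target_view=random.randrange(4),
            question="Where is the red mug?",
            answer="on the kitchen counter",
        ))
    return data


class ToyViewSelector:
    """Placeholder for a pretrained VLM fine-tuned to score candidate views."""

    def score_views(self, sample):
        # A real model would condition on the question and the current image.
        return [random.random() for _ in sample.candidate_views]

    def select(self, sample):
        scores = self.score_views(sample)
        return max(range(len(scores)), key=scores.__getitem__)


def sft_stage(model, dataset):
    """Stage 1: measure agreement with the annotated target view.
    A real implementation would backpropagate a cross-entropy loss
    through the VLM rather than just evaluate accuracy."""
    correct = sum(model.select(s) == s.target_view for s in dataset)
    return correct / len(dataset)


def rl_stage(model, dataset, answer_reward):
    """Stage 2: policy optimization where the reward reflects whether the
    downstream QA succeeds from the selected viewpoint."""
    rewards = [answer_reward(s, model.select(s)) for s in dataset]
    return sum(rewards) / len(rewards)


if __name__ == "__main__":
    data = build_synthetic_dataset()
    model = ToyViewSelector()
    print("SFT view-selection accuracy:", sft_stage(model, data))
    # Toy reward: +1 if the chosen view matches the annotated target view.
    print("Mean RL reward:", rl_stage(model, data, lambda s, v: float(v == s.target_view)))
```

In the paper's framing, the second stage would use the VLM's answer from the chosen view (rather than view-index agreement) as the reward signal; the toy reward above is only a stand-in to keep the sketch runnable.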
Source: arXiv:2512.13250