arXiv submission date: 2026-01-08
📄 Abstract - CoV: Chain-of-View Prompting for Spatial Reasoning

Embodied question answering (EQA) in 3D environments often requires collecting context that is distributed across multiple viewpoints and partially occluded. However, most recent vision-language models (VLMs) are constrained to a fixed and finite set of input views, which limits their ability to acquire question-relevant context at inference time and hinders complex spatial reasoning. We propose Chain-of-View (CoV) prompting, a training-free, test-time reasoning framework that transforms a VLM into an active viewpoint reasoner through a coarse-to-fine exploration process. CoV first employs a View Selection agent to filter redundant frames and identify question-aligned anchor views. It then performs fine-grained view adjustment by interleaving iterative reasoning with discrete camera actions, obtaining new observations from the underlying 3D scene representation until sufficient context is gathered or a step budget is reached. We evaluate CoV on OpenEQA across four mainstream VLMs and obtain an average +11.56% improvement in LLM-Match, with a maximum gain of +13.62% on Qwen3-VL-Flash. CoV further exhibits test-time scaling: increasing the minimum action budget yields an additional +2.51% average improvement, peaking at +3.73% on Gemini-2.5-Flash. On ScanQA and SQA3D, CoV delivers strong performance (e.g., 116 CIDEr / 31.9 EM@1 on ScanQA and 51.1 EM@1 on SQA3D). Overall, these results suggest that question-aligned view selection coupled with open-view search is an effective, model-agnostic strategy for improving spatial reasoning in 3D EQA without additional training.
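
The abstract describes a two-stage loop: coarse anchor-view selection, then iterative reasoning interleaved with discrete camera actions against a 3D scene. The sketch below is a minimal, hypothetical rendering of that loop in Python; the `vlm.ask` and `scene.render` interfaces, the `ACTIONS` vocabulary, and the `ANSWER:` convention are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the Chain-of-View loop as described in the abstract.
# All interfaces here (vlm.ask, scene.render, the ACTIONS vocabulary, and the
# "ANSWER:" convention) are hypothetical stand-ins, not the paper's actual API.

from dataclasses import dataclass, field

# Hypothetical discrete camera actions the VLM may request.
ACTIONS = ["move_forward", "turn_left", "turn_right", "look_up", "look_down"]

@dataclass
class CoVState:
    views: list = field(default_factory=list)  # observations gathered so far
    steps: int = 0                             # camera actions taken so far

def select_anchor_views(vlm, frames, question, k=4):
    """Coarse stage: ask the VLM for the indices of the k frames most
    relevant to the question, discarding redundant or off-topic views."""
    prompt = f"Question: {question}\nReturn the indices of the {k} most relevant frames."
    indices = vlm.ask(prompt, images=frames)  # assumed to return a list of ints
    return [frames[i] for i in indices[:k]]

def chain_of_view(vlm, scene, frames, question, max_steps=8):
    """Fine stage: interleave reasoning with discrete camera actions until
    the model answers or the step budget is exhausted."""
    state = CoVState(views=select_anchor_views(vlm, frames, question))
    while state.steps < max_steps:
        prompt = (f"Question: {question}\n"
                  f"Either reply 'ANSWER: <answer>' or request one of "
                  f"{ACTIONS} to observe the scene from a new viewpoint.")
        reply = vlm.ask(prompt, images=state.views)  # assumed to return a string
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()  # enough context gathered
        # Execute the requested camera action against the underlying 3D scene
        # representation and append the newly rendered observation.
        state.views.append(scene.render(action=reply))
        state.steps += 1
    # Budget exhausted: force a final answer from the views collected so far.
    return vlm.ask(f"Question: {question}\nAnswer now.", images=state.views)
```

Under these assumptions, `max_steps` plays the role of the action budget whose increase drives the test-time-scaling gains reported in the abstract.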

Top-level tags: agents, computer vision, natural language processing
Detailed tags: embodied question answering, spatial reasoning, vision-language models, active perception, 3d environments

CoV: Chain-of-View Prompting for Spatial Reasoning


1️⃣ One-Sentence Summary

This paper proposes a training-free reasoning framework called Chain-of-View, which lets a vision-language model actively select and adjust its viewpoints in a 3D scene to gather information, substantially improving its performance on complex spatial question-answering tasks.

Source: arXiv:2601.05172