arXiv submission date: 2026-04-13
📄 Abstract - Test-time Scaling over Perception: Resolving the Grounding Paradox in Thinking with Images

Recent multimodal large language models (MLLMs) have begun to support Thinking with Images by invoking visual tools such as zooming and cropping during inference. Yet these systems remain brittle in fine-grained visual reasoning because they must decide where to look before they have access to the evidence needed to make that decision correctly. We identify this circular dependency as the Grounding Paradox. To address it, we propose Test-Time Scaling over Perception (TTSP), a framework that treats perception itself as a scalable inference process. TTSP generates multiple exploratory perception traces, filters unreliable traces using entropy-based confidence estimation, distills validated observations into structured knowledge, and iteratively refines subsequent exploration toward unresolved uncertainty. Extensive experiments on high-resolution and general multimodal reasoning benchmarks show that TTSP consistently outperforms strong baselines across backbone sizes, while also exhibiting favorable scalability and token efficiency. Our results suggest that scaling perception at test time is a promising direction for robust multimodal reasoning under perceptual uncertainty.
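The abstract describes a four-stage loop: sample several perception traces, drop unreliable ones via entropy-based confidence estimation, distill the survivors, and refine the next round. The paper's actual implementation is not shown here; the following is a minimal sketch of just the entropy-filtering step, with all function names, the data layout (`token_probs` as per-step probability distributions), and the threshold value being illustrative assumptions, not the authors' code.

```python
import math

def trace_entropy(token_probs: list[list[float]]) -> float:
    """Mean per-step Shannon entropy (in nats) over one perception trace.

    token_probs[i] is a probability distribution over the model's choices
    at step i. Lower mean entropy is read as higher confidence in the
    trace's observations. (Hypothetical helper, not from the paper.)
    """
    step_entropies = [
        -sum(p * math.log(p) for p in dist if p > 0)
        for dist in token_probs
    ]
    return sum(step_entropies) / len(step_entropies)

def filter_traces(traces, entropy_threshold: float = 0.3):
    """Keep only (text, token_probs) traces below the entropy threshold.

    The threshold is an assumed tuning knob; the paper does not specify
    its value or exact filtering rule here.
    """
    return [
        (text, probs)
        for text, probs in traces
        if trace_entropy(probs) < entropy_threshold
    ]

if __name__ == "__main__":
    confident = ("object in top-left crop", [[0.99, 0.01]])   # entropy ~0.056
    uncertain = ("possibly a cat?", [[0.5, 0.5]])             # entropy ~0.693
    kept = filter_traces([confident, uncertain])
    print([text for text, _ in kept])
```

A confident trace (near one-hot distributions) passes the filter while a maximally uncertain one is discarded; in the full TTSP loop, the discarded trace's region would instead become a target for further exploration.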

Top-level tags: multi-modal llm, model evaluation
Detailed tags: multimodal reasoning, test-time scaling, perceptual uncertainty, visual grounding, iterative refinement

Test-time Scaling over Perception: Resolving the Grounding Paradox in Thinking with Images


1️⃣ One-Sentence Summary

This paper proposes a method called TTSP that lets a model, at inference time, observe an image from multiple angles, filter out unreliable information, consolidate validated knowledge, and focus follow-up exploration on remaining points of doubt, much as a human would. This resolves the "where to look first" dilemma that current multimodal models face in fine-grained visual reasoning and markedly improves their ability to understand and analyze complex images.

Source: arXiv 2604.11025