arXiv submission date: 2026-02-25
📄 Abstract - See It, Say It, Sorted: An Iterative Training-Free Framework for Visually-Grounded Multimodal Reasoning in LVLMs

Recent large vision-language models (LVLMs) have demonstrated impressive reasoning ability by generating long chain-of-thought (CoT) responses. However, CoT reasoning in multimodal contexts is highly vulnerable to visual hallucination propagation: once an intermediate reasoning step becomes inconsistent with the visual evidence, subsequent steps, even if logically valid, can still lead to incorrect final answers. Existing solutions attempt to mitigate this issue by training models to "think with images" via reinforcement learning (RL). While effective, these methods are costly, model-specific, and difficult to generalize across architectures. In contrast, we present a lightweight method that bypasses RL training and provides an iterative, training-free, plug-and-play framework for visually-grounded multimodal reasoning. Our key idea is to supervise each reasoning step at test time with visual evidence, ensuring that every decoded token is justified by corresponding visual cues. Concretely, we construct a textual visual-evidence pool that guides the model's reasoning generation. When existing evidence is insufficient, a visual decider module dynamically extracts additional relevant evidence from the image based on the ongoing reasoning context, expanding the pool until the model achieves sufficient visual certainty to terminate reasoning and produce the final answer. Extensive experiments on multiple LVLM backbones and benchmarks demonstrate the effectiveness of our approach. Our method achieves 16.5%-29.5% improvements on TreeBench and 13.7% RH-AUC gains on RH-Bench, substantially reducing hallucination rates while improving reasoning accuracy without additional training.
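The abstract's test-time loop (decode a step, check visual grounding, expand the evidence pool via the decider when grounding is insufficient, terminate on sufficient certainty) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: every name here (`visual_decider`, `reason_step`, `grounded_reasoning`) and every threshold is a hypothetical stand-in for the paper's actual modules and certainty criteria.

```python
# Hypothetical sketch of the iterative training-free grounding loop.
# All functions, names, and thresholds are illustrative assumptions.

def visual_decider(image, context):
    """Stub: extract one new piece of textual visual evidence from the
    image, conditioned on the ongoing reasoning context."""
    return f"evidence-for:{context[-1] if context else 'start'}"

def reason_step(evidence_pool, context):
    """Stub: propose the next reasoning step grounded in the pool.
    Returns (step_text, is_grounded); is_grounded says whether the
    pool already provides sufficient visual support for this step."""
    is_grounded = len(evidence_pool) >= 3  # toy certainty criterion
    return f"step-{len(context)}", is_grounded

def grounded_reasoning(image, max_steps=10):
    """Iterate: decode a step; if visual certainty is insufficient,
    ask the decider for more evidence and retry, expanding the pool;
    otherwise accept the step and check the termination condition."""
    pool, context = [], []
    for _ in range(max_steps):
        step, grounded = reason_step(pool, context)
        if not grounded:
            pool.append(visual_decider(image, context))  # expand pool
            continue
        context.append(step)
        if len(context) >= 2:  # toy termination: enough grounded steps
            break
    return context, pool

steps, pool = grounded_reasoning(image="img.png")
```

In the real framework the grounding check and termination would come from the LVLM's own visual certainty over its decoded tokens rather than the toy counters used here.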

Top tags: llm multi-modal model evaluation
Detailed tags: multimodal reasoning visual hallucination chain-of-thought training-free benchmark

See It, Say It, Sorted: An Iterative Training-Free Framework for Visually-Grounded Multimodal Reasoning in LVLMs


1️⃣ One-Sentence Summary

This paper proposes a lightweight, plug-and-play method that requires no additional training: by making a large vision-language model ground every step of its reasoning strictly in image evidence, it addresses the problem of visual hallucination propagation corrupting final answers in multimodal reasoning, and significantly improves accuracy across multiple benchmarks.

Source: arXiv 2602.21497