📄 Paper Summary
Draft and Refine with Visual Experts
1️⃣ One-Sentence Summary
This work proposes a new method that quantifies how strongly a model relies on image information and incorporates feedback from visual experts, effectively reducing the tendency of large vision-language models to fabricate content in their answers and thereby improving answer accuracy and reliability.
While recent Large Vision-Language Models (LVLMs) exhibit strong multimodal reasoning abilities, they often produce ungrounded or hallucinated responses because they rely too heavily on linguistic priors instead of visual evidence. This limitation highlights the absence of a quantitative measure of how much these models actually use visual information during reasoning. We propose Draft and Refine (DnR), an agent framework driven by a question-conditioned utilization metric. The metric quantifies the model's reliance on visual evidence by first constructing a query-conditioned relevance map to localize question-specific cues and then measuring dependence through relevance-guided probabilistic masking. Guided by this metric, the DnR agent refines its initial draft using targeted feedback from external visual experts. Each expert's output (such as boxes or masks) is rendered as visual cues on the image, and the model is re-queried to select the response that yields the largest improvement in utilization. This process strengthens visual grounding without retraining or architectural changes. Experiments across VQA and captioning benchmarks show consistent accuracy gains and reduced hallucination, demonstrating that measuring visual utilization provides a principled path toward more interpretable and evidence-driven multimodal agent systems. Code is available at this https URL.
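The core loop the abstract describes — score the draft's reliance on question-relevant visual evidence via relevance-guided masking, then keep the expert-cued response that most improves that score — can be sketched as follows. This is a toy illustration, not the paper's implementation: `answer_likelihood`, the dict-based "image", and the region names are all hypothetical stand-ins for the LVLM and its query-conditioned relevance map.

```python
# Hedged sketch of the Draft-and-Refine (DnR) selection loop.
# Assumption: the LVLM and visual experts are replaced by toy stubs;
# an "image" is a dict mapping region labels to visibility flags.

def answer_likelihood(image, answer):
    """Toy stand-in for the model's answer likelihood: overlap between
    answer tokens and visible region labels."""
    visible = {label for label, on in image.items() if on}
    tokens = answer.split()
    return len(visible & set(tokens)) / max(len(tokens), 1)

def utilization(image, answer, relevant_regions):
    """Relevance-guided masking: utilization is the drop in answer
    likelihood when question-relevant regions are masked out."""
    masked = {r: (on and r not in relevant_regions) for r, on in image.items()}
    return answer_likelihood(image, answer) - answer_likelihood(masked, answer)

def draft_and_refine(image, relevant_regions, draft, expert_candidates):
    """Keep the expert-cued response with the largest utilization gain
    over the initial draft; otherwise keep the draft."""
    best, best_u = draft, utilization(image, draft, relevant_regions)
    for cand in expert_candidates:
        u = utilization(image, cand, relevant_regions)
        if u > best_u:
            best, best_u = cand, u
    return best

# Toy scene: "dog" and "ball" are visible; the question is about the dog.
image = {"dog": True, "ball": True, "cat": False}
relevant = {"dog"}
draft = "cat"                 # hallucinated draft ignoring the image
candidates = ["dog", "ball"]  # responses after rendering expert cues
print(draft_and_refine(image, relevant, draft, candidates))  # -> dog
```

The key design point mirrored here is that selection is driven by the utilization metric itself, not by answer plausibility: "ball" is just as visible as "dog", but masking the question-relevant region only hurts "dog", so it wins.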