When Vision-Language Models Judge Without Seeing: Exposing Informativeness Bias
1️⃣ One-Sentence Summary
This paper finds that current "VLM-as-a-Judge" systems used for automatic evaluation of vision-language models have a fundamental flaw: they often ignore the image content and blindly favor the more informative answer. To address this, the authors propose a new judging paradigm called BIRCH that corrects this bias and significantly improves judge reliability.
The reliability of VLM-as-a-Judge is critical for the automatic evaluation of vision-language models (VLMs). Despite recent progress, our analysis reveals that VLM judges often pay limited attention to the image when making decisions. Instead, they blindly favor the more informative answer, even when they can recognize that it conflicts with the image content. We call this problem informativeness bias, and it significantly undermines judge reliability. To address it, we propose BIRCH (Balanced Informativeness and CoRrectness with a Truthful AnCHor), a judging paradigm that first corrects inconsistencies with the image content in candidate answers, and then compares the answers against this corrected version. This shifts the judge's focus from informativeness to image-grounded correctness. Experiments on multiple models and benchmarks show that BIRCH reduces informativeness bias by up to 17%, resulting in performance gains of up to 9.8%. Our work reveals an overlooked but fundamental flaw in current VLM-as-a-Judge systems and highlights the need for more principled designs.
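The two-step paradigm the abstract describes (correct against the image first, then compare to the corrected version) can be sketched as a minimal pipeline. This is an illustrative reconstruction, not the authors' implementation: the `vlm` callable, the prompt wording, and the token-overlap scoring used as a correctness proxy are all assumptions made for the sketch.

```python
# Hedged sketch of the BIRCH judging paradigm: (1) ask the VLM to correct a
# candidate answer against the image, yielding a "truthful anchor";
# (2) score each candidate by its agreement with that anchor, so the
# comparison rewards image-grounded correctness rather than informativeness.
# The `vlm` callable and the Jaccard-overlap scorer are placeholders.

def correct_against_image(vlm, image, question, answer):
    """Step 1: have the VLM fix any claims that conflict with the image."""
    prompt = (f"Correct this answer using only what the image shows.\n"
              f"Question: {question}\nAnswer: {answer}")
    return vlm(prompt, image)

def anchor_agreement(answer, anchor):
    """Toy correctness proxy: Jaccard token overlap with the anchor."""
    ta, tb = set(answer.lower().split()), set(anchor.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def birch_judge(vlm, image, question, answer_a, answer_b):
    """Step 2: prefer the candidate closer to the image-grounded anchor."""
    anchor = correct_against_image(vlm, image, question, answer_a)
    score_a = anchor_agreement(answer_a, anchor)
    score_b = anchor_agreement(answer_b, anchor)
    return "A" if score_a >= score_b else "B"

# Demo with a stub VLM: the longer, more "informative" answer B contradicts
# the image, so the anchor pulls the verdict toward the grounded answer A.
def stub_vlm(prompt, image):
    return "a brown dog on a sofa"  # pretend image-grounded correction

verdict = birch_judge(stub_vlm, image=None, question="What is in the image?",
                      answer_a="a brown dog sitting on a sofa",
                      answer_b="a black cat on a red sofa next to three lamps")
print(verdict)  # → A
```

The point of the anchor is that both candidates are scored against the same corrected reference, so extra detail only helps if it survives the image-grounding step.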
Source: arXiv: 2604.17768