arXiv submission date: 2025-12-02
📄 Abstract - Self-Improving VLM Judges Without Human Annotations

Effective judges of Vision-Language Models (VLMs) are crucial for model development. Current methods for training VLM judges mainly rely on large-scale human preference annotations. However, such an approach is costly, and the annotations easily become obsolete as models rapidly improve. In this work, we present a framework to self-train a VLM judge model without any human preference annotations, using only self-synthesized data. Our method is iterative and has three stages: (1) generate diverse multimodal instruction-response pairs at varying quality levels, (2) generate reasoning traces and judgments for each pair, removing the ones that do not match our expected quality levels, and (3) train on correct judge answers and their reasoning traces. We evaluate the resulting judge on Multimodal RewardBench and VL-RewardBench across domains: correctness, preference, reasoning, safety, and visual question-answering. Our method improves a Llama-3.2-11B multimodal judge from 0.38 to 0.51 in overall accuracy on VL-RewardBench, often outperforming much larger models including Llama-3.2-90B, GPT-4o, and Claude 3.5 Sonnet, with particularly strong gains in the general, hallucination, and reasoning dimensions. The overall strength of these human-annotation-free results suggests the potential for a future self-judge that evolves alongside rapidly improving VLM capabilities.
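The three-stage loop above can be sketched as a minimal toy pipeline. Everything below is illustrative: the function names, the quality labels, and the noisy stand-in judge are assumptions for demonstration, not the authors' implementation.

```python
import random

# Toy, self-contained sketch of the paper's three-stage self-training loop.
# All names and numbers here are hypothetical placeholders.

def synthesize_pairs(n=100, seed=0):
    """Stage 1: synthesize instruction-response pairs at known quality levels."""
    rng = random.Random(seed)
    return [(f"instruction-{i}", f"response-{i}", rng.choice(["good", "bad"]))
            for i in range(n)]

def judge_pair(idx, target, noise=0.3, seed=1):
    """Stage 2: emit a reasoning trace and a verdict (noisy stand-in judge)."""
    rng = random.Random(seed + idx)
    flipped = "bad" if target == "good" else "good"
    verdict = target if rng.random() >= noise else flipped
    return f"reasoning trace {idx}", verdict

def build_training_set(pairs):
    """Filter: keep only judgments that match the expected quality level."""
    kept = []
    for i, (instruction, response, target) in enumerate(pairs):
        trace, verdict = judge_pair(i, target)
        if verdict == target:
            kept.append((instruction, response, trace, verdict))
    return kept  # Stage 3 would fine-tune the judge on these retained traces

pairs = synthesize_pairs()
train = build_training_set(pairs)
print(f"kept {len(train)} of {len(pairs)} synthesized pairs")
```

The key design point is the filter in stage 2: because each synthesized pair carries a known target quality, judgments that disagree with it can be discarded without any human label, and the surviving reasoning traces become the stage-3 training data for the next iteration.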

Top-level tags: model evaluation, natural language processing, computer vision
Detailed tags: vision-language models, judge model, self-training, evaluation, multimodal

Self-Improving VLM Judges Without Human Annotations


1️⃣ One-sentence summary

This paper proposes a method for iteratively training a vision-language model judge without human annotations, using only the model's own synthesized data. The resulting judge surpasses larger models, including GPT-4o, on multiple evaluation dimensions, demonstrating the potential for a judge that evolves in step with model capabilities.


Source: arXiv:2512.05145