What Users Leave Unsaid: Under-Specified Queries Limit Vision-Language Models
1️⃣ One-Sentence Summary
This paper shows that users' real-world questions about images are often incomplete, which causes even today's top vision-language models to perform poorly; rewriting the questions more explicitly substantially improves answer accuracy, revealing a large gap between current model evaluation and real-world use.
Current vision-language benchmarks predominantly feature well-structured questions with clear, explicit prompts. However, real user queries are often informal and under-specified: users naturally leave much unsaid, relying on images to convey context. We introduce HAERAE-Vision, a benchmark of 653 real-world visual questions from Korean online communities (a 0.76% survival rate from 86K candidates), each paired with an explicit rewrite, yielding 1,306 query variants in total. Evaluating 39 VLMs, we find that even state-of-the-art models (GPT-5, Gemini 2.5 Pro) achieve under 50% accuracy on the original queries. Crucially, query explicitation alone yields 8- to 22-point improvements, with smaller models benefiting most. We further show that even with web search, under-specified queries underperform explicit queries without search, revealing that current retrieval cannot compensate for what users leave unsaid. Our findings demonstrate that a substantial portion of VLM difficulty stems from natural query under-specification rather than model capability, highlighting a critical gap between benchmark evaluation and real-world deployment.
Source: arXiv: 2601.06165