arXiv submission date: 2025-12-24
📄 Abstract - Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models

We expose a significant popularity bias in state-of-the-art vision-language models (VLMs), which achieve up to 34% higher accuracy on famous buildings compared to ordinary ones, indicating a reliance on memorization over generalizable understanding. To systematically investigate this, we introduce the largest open benchmark for this task: the YearGuessr dataset, a collection of 55,546 building images with multi-modal attributes from 157 countries, annotated with continuous ordinal labels of their construction year (1001-2024), GPS data, and page-view counts as a proxy for popularity. Using this dataset, we frame the construction year prediction task as ordinal regression and introduce popularity-aware interval accuracy metrics to quantify this bias. Our resulting benchmark of 30+ models, including our YearCLIP model, confirms that VLMs excel on popular, memorized items but struggle significantly with unrecognized subjects, exposing a critical flaw in their reasoning capabilities. Project page: this https URL
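The abstract frames construction-year prediction as ordinal regression and proposes popularity-aware interval accuracy metrics, with page-view counts as the popularity proxy. Below is a minimal sketch of what such a metric could look like, assuming "interval accuracy" means the prediction falls within ±k years of the true year, stratified by page-view quantiles; the function names, the ±k threshold, and the binning scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def interval_accuracy(y_true, y_pred, k=25):
    """Fraction of predictions within ±k years of the true construction year."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_pred - y_true) <= k))

def popularity_stratified_accuracy(y_true, y_pred, page_views, k=25, n_bins=4):
    """Interval accuracy per page-view quantile bin (bin 0 = least popular).

    A large gap between the lowest and highest bins is the kind of
    popularity bias the paper reports.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    page_views = np.asarray(page_views)
    # Quantile edges over page views; e.g. quartiles when n_bins=4.
    edges = np.quantile(page_views, np.linspace(0, 1, n_bins + 1))
    # Assign each sample to a popularity bin using the interior edges.
    bins = np.clip(np.digitize(page_views, edges[1:-1]), 0, n_bins - 1)
    return {b: interval_accuracy(y_true[bins == b], y_pred[bins == b], k)
            for b in range(n_bins)}
```

Quantile binning (rather than fixed view-count thresholds) keeps the bins equally populated, so per-bin accuracies remain comparable across models.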

Top-level tags: computer vision, multi-modal, model evaluation
Detailed tags: vision-language models, popularity bias, ordinal regression, benchmark, dataset, memorization

Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models


1️⃣ One-Sentence Summary

By constructing a large dataset of more than 55,000 building images, this paper shows that current state-of-the-art vision-language models exhibit a severe popularity bias: their accuracy on famous buildings is far higher than on ordinary ones, revealing that these models rely on memorization rather than genuine understanding.

Source: arXiv:2512.21337