arXiv submission date: 2026-04-19
📄 Abstract - More Than Meets the Eye: Measuring the Semiotic Gap in Vision-Language Models via Semantic Anchorage

Vision-Language Models (VLMs) excel at photorealistic generation, yet often struggle to represent abstract meaning such as idiomatic interpretations of noun compounds. To test whether high visual fidelity interferes with idiomatic compositionality, we introduce DIVA, a controlled benchmark that replaces high-fidelity visual detail with schematic iconicity by generating paired, sense-anchored visualizations for literal and idiomatic readings. We further propose the Semantic Alignment Gap ($\Delta$), an architecture-agnostic metric that quantifies the divergence between literal and idiomatic visual grounding, and a signed bias $b(t)$ that separately measures the direction and strength of literal preference. Evaluating eight recent VLMs, we reveal a consistent Literal Superiority Bias: model scale alone does not resolve literal preference, and greater visual fidelity is associated with weaker symbolic alignment, suggesting cognitive interference from hyper-realistic imagery. Our findings suggest that improving compositional understanding requires iconographic abstraction of visual input and anchoring both interpretation and generation in intended meaning.
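The abstract defines $\Delta$ and $b(t)$ only informally, so the sketch below shows one plausible way such scores could be computed. It assumes alignment is measured as CLIP-style cosine similarity between a compound's text embedding and the embeddings of its paired literal and idiomatic visualizations; the function names, the cosine choice, and the absolute-difference form of $\Delta$ are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch (not the paper's implementation) of the Semantic Alignment
# Gap (Delta) and the signed bias b(t) for a noun compound t.
# Assumption: alignment is scored as cosine similarity between a text
# embedding of t and embeddings of its literal / idiomatic visualizations.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def signed_bias(text_emb: np.ndarray,
                literal_emb: np.ndarray,
                idiomatic_emb: np.ndarray) -> float:
    """b(t): positive values indicate literal preference, negative values
    idiomatic preference; the magnitude gives the strength of the bias."""
    return cosine(text_emb, literal_emb) - cosine(text_emb, idiomatic_emb)

def semantic_alignment_gap(text_emb: np.ndarray,
                           literal_emb: np.ndarray,
                           idiomatic_emb: np.ndarray) -> float:
    """Delta(t): unsigned divergence between literal and idiomatic grounding,
    taken here as the absolute value of the signed bias."""
    return abs(signed_bias(text_emb, literal_emb, idiomatic_emb))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in 512-d embeddings; in practice these would come from a VLM encoder.
    t, lit, idio = rng.normal(size=(3, 512))
    print(f"b(t) = {signed_bias(t, lit, idio):+.4f}")
    print(f"Delta(t) = {semantic_alignment_gap(t, lit, idio):.4f}")
```

Under this reading, averaging $b(t)$ over a benchmark's compounds would surface the Literal Superiority Bias the authors report, while $\Delta$ tracks its magnitude irrespective of direction.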

Top-level tags: multi-modal, model evaluation, llm
Detailed tags: vision-language models, semiotic gap, idiomatic compositionality, benchmark, literal superiority bias

More Than Meets the Eye: Measuring the Semiotic Gap in Vision-Language Models via Semantic Anchorage


1️⃣ One-Sentence Summary

This paper finds that although vision-language models excel at generating photorealistic images, an excessive pursuit of visual realism actually interferes with their understanding of abstract semantics such as idioms. To probe this, it proposes an evaluation benchmark built on contrastive iconographic images, together with quantitative metrics, revealing a pervasive Literal Superiority Bias across models.

Source: arXiv 2604.17354