VTCBench: Can Vision-Language Models Understand Long Context with Vision-Text Compression?
1️⃣ One-Sentence Summary
This paper introduces the first benchmark for evaluating how well vision-language models understand long text under vision-text compression. It finds that although models can recognize the text in compressed images, they perform poorly when they must link and reason over long-range information, providing an important reference for designing more efficient models.
The computational and memory overheads associated with expanding the context window of LLMs severely limit their scalability. A noteworthy solution is vision-text compression (VTC), exemplified by frameworks like DeepSeek-OCR and Glyph, which convert long texts into dense 2D visual representations, thereby achieving token compression ratios of 3x-20x. However, the impact of this high information density on the core long-context capabilities of vision-language models (VLMs) remains under-investigated. To address this gap, we introduce the first benchmark for VTC and systematically assess the performance of VLMs across three long-context understanding settings: VTC-Retrieval, which evaluates the model's ability to retrieve and aggregate information; VTC-Reasoning, which requires models to infer latent associations to locate facts with minimal lexical overlap; and VTC-Memory, which measures comprehensive question answering within long-term dialogue memory. Furthermore, we establish VTCBench-Wild to simulate diverse input scenarios. We comprehensively evaluate leading open-source and proprietary models on our benchmarks. The results indicate that, despite being able to decode textual information (e.g., OCR) well, most VLMs exhibit a surprisingly poor long-context understanding ability with VTC-compressed information, failing to capture long associations or dependencies in the context. This study provides a deep understanding of VTC and serves as a foundation for designing more efficient and scalable VLMs.
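To make the compression arithmetic concrete, the following is a minimal, hypothetical Python sketch of the VTC idea: render a long text onto a dense "page" image and compare the text-token cost against the vision-token cost of a patch-based encoder. All numeric choices (chars-per-token heuristic, patch size, line height, downscale factor) are illustrative assumptions and do not reproduce the DeepSeek-OCR or Glyph pipelines, which render text far more densely and use purpose-built visual encoders to reach the reported 3x-20x ratios.

```python
# Illustrative sketch of vision-text compression (VTC): render text as an
# image page, then estimate the token savings for a ViT-style encoder.
# NOT the DeepSeek-OCR / Glyph implementation; all constants are assumptions.
import textwrap

from PIL import Image, ImageDraw, ImageFont

CHARS_PER_TEXT_TOKEN = 4   # rough subword-tokenizer heuristic (assumption)
PATCH_SIZE = 16            # vision-encoder patch size in pixels (assumption)
LINE_HEIGHT = 12           # pixels per rendered line (assumption)
CHARS_PER_LINE = 160       # characters wrapped onto each line (assumption)
IMAGE_WIDTH = 1024         # rendered page width in pixels (assumption)
DOWNSCALE = 0.5            # resize factor before patchification (assumption)


def render_text_to_image(text: str) -> Image.Image:
    """Render the text as a single dense page image."""
    lines = textwrap.wrap(text, width=CHARS_PER_LINE)
    height = max(LINE_HEIGHT * len(lines) + LINE_HEIGHT, PATCH_SIZE)
    img = Image.new("RGB", (IMAGE_WIDTH, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for i, line in enumerate(lines):
        draw.text((4, i * LINE_HEIGHT), line, fill="black", font=font)
    # Downscale, mimicking the lower effective resolution a vision encoder sees.
    return img.resize((int(img.width * DOWNSCALE), int(img.height * DOWNSCALE)))


def estimated_compression_ratio(text: str) -> float:
    """Approximate (text tokens) / (vision tokens) for the rendered page."""
    img = render_text_to_image(text)
    text_tokens = max(len(text) // CHARS_PER_TEXT_TOKEN, 1)
    vision_tokens = max((img.width // PATCH_SIZE) * (img.height // PATCH_SIZE), 1)
    return text_tokens / vision_tokens


if __name__ == "__main__":
    long_text = "Vision-text compression renders long documents as images. " * 400
    print(f"estimated compression ratio: {estimated_compression_ratio(long_text):.1f}x")
```

The sketch only shows where the token savings come from; the benchmark's question is whether a VLM can still retrieve, associate, and reason over long-range information once that text lives in pixels rather than in discrete tokens.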
Source: arXiv:2512.15649