Can Vision Replace Text in Working Memory? Evidence from Spatial n-Back in Vision-Language Models
1️⃣ One-Sentence Summary
Using a spatial memory test, this paper finds that vision-language models show more accurate and reliable working-memory performance when information is presented as text than as visual images, revealing computational differences in how these models handle multimodal working memory.
Working memory is a central component of intelligent behavior, providing a dynamic workspace for maintaining and updating task-relevant information. Recent work has used n-back tasks to probe working-memory-like behavior in large language models, but it is unclear whether the same probe elicits comparable computations when information is carried in a visual rather than textual code in vision-language models. We evaluate Qwen2.5 and Qwen2.5-VL on a controlled spatial n-back task presented as matched text-rendered or image-rendered grids. Across conditions, models show reliably higher accuracy and d' with text than with vision. To interpret these differences at the process level, we use trial-wise log-probability evidence and find that nominal 2/3-back often fails to reflect the instructed lag and instead aligns with a recency-locked comparison. We further show that grid size alters recent-repeat structure in the stimulus stream, thereby changing interference and error patterns. These results motivate computation-sensitive interpretations of multimodal working memory.
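To make the evaluation setup concrete, here is a minimal Python sketch of two measurement ingredients the abstract relies on: a spatial n-back stimulus stream whose match rate and grid size can be controlled, and a d′ computation from hit and false-alarm counts. Everything here is illustrative: the function names, the 30% match rate, the log-linear correction, and the recency-locked "model" stand-in are assumptions for exposition, not the paper's actual code.

```python
import random
from statistics import NormalDist

def make_nback_stream(grid_size: int, length: int, n: int,
                      match_rate: float = 0.3, seed: int = 0):
    """Generate a stream of grid positions with a controlled rate of n-back matches.

    Smaller grids mechanically raise the chance of *incidental* recent repeats
    (lure trials), the interference mechanism the paper ties to grid size.
    """
    rng = random.Random(seed)
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    stream = [rng.choice(cells) for _ in range(n)]
    for t in range(n, length):
        if rng.random() < match_rate:
            stream.append(stream[t - n])           # planned n-back match
        else:
            candidates = [c for c in cells if c != stream[t - n]]
            stream.append(rng.choice(candidates))  # non-match (may still repeat at other lags)
    return stream

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection d' with a log-linear correction to avoid infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

if __name__ == "__main__":
    stream = make_nback_stream(grid_size=3, length=40, n=2)
    targets = [t >= 2 and stream[t] == stream[t - 2] for t in range(len(stream))]
    # Stand-in for model responses: a recency-locked (1-back) comparator,
    # mimicking the failure mode the paper reports for nominal 2/3-back.
    responses = [t >= 1 and stream[t] == stream[t - 1] for t in range(len(stream))]
    h = sum(r and tg for r, tg in zip(responses, targets))
    m = sum((not r) and tg for r, tg in zip(responses, targets))
    fa = sum(r and (not tg) for r, tg in zip(responses, targets))
    cr = sum((not r) and (not tg) for r, tg in zip(responses, targets))
    print(f"hits={h} misses={m} FA={fa} CR={cr} d'={d_prime(h, m, fa, cr):.2f}")
```

Running the stand-in exhibits the signature the paper's trial-wise log-probability analysis is designed to detect: a comparator locked to lag 1 misses most planned 2-back targets and instead false-alarms on incidental 1-back repeats, which grow more frequent as the grid shrinks.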
Source: arXiv:2602.04355