arXiv submission date: 2025-12-24
📄 Abstract - Latent Implicit Visual Reasoning

While Large Multimodal Models (LMMs) have made significant progress, they remain largely text-centric, relying on language as their core reasoning modality. As a result, they are limited in their ability to handle reasoning tasks that are predominantly visual. Recent approaches have sought to address this by supervising intermediate visual steps with helper images, depth maps, or image crops. However, these strategies impose restrictive priors on what "useful" visual abstractions look like, add heavy annotation costs, and struggle to generalize across tasks. To address this critical limitation, we propose a task-agnostic mechanism that trains LMMs to discover and use visual reasoning tokens without explicit supervision. These tokens attend globally and re-encode the image in a task-adaptive way, enabling the model to extract relevant visual information without hand-crafted supervision. Our approach outperforms direct fine-tuning and achieves state-of-the-art results on a diverse range of vision-centric tasks -- including those where intermediate abstractions are hard to specify -- while also generalizing to multi-task instruction tuning.
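
The abstract does not spell out the architecture, but the core idea of learnable visual reasoning tokens that attend globally over the image and produce a task-adaptive re-encoding can be sketched roughly as below. This is a minimal illustration under assumptions: the class name `LatentVisualReasoningTokens`, the token count, the cross-attention layout, and the way the latents are appended to the visual prefix are all hypothetical choices for exposition, not the paper's actual implementation.

```python
# Minimal sketch (PyTorch) of latent visual reasoning tokens, assuming a generic
# LMM that consumes a sequence of image-patch embeddings. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class LatentVisualReasoningTokens(nn.Module):
    """Learnable latent tokens that attend globally over image features and
    return a task-adaptive re-encoding, trained end to end without any
    token-level supervision."""

    def __init__(self, num_tokens: int = 16, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Latent "visual reasoning tokens": discovered by gradient descent,
        # not tied to any hand-crafted intermediate (depth map, crop, ...).
        self.latent = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        # Cross-attention: the latents (queries) attend over all patch embeddings.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, num_patches, dim) from the vision encoder.
        b = image_feats.size(0)
        q = self.latent.unsqueeze(0).expand(b, -1, -1)
        # Global attention: every latent token can read the whole image.
        attended, _ = self.cross_attn(q, image_feats, image_feats)
        return attended + self.ffn(attended)  # (batch, num_tokens, dim)


# Usage: append the re-encoded latents to the visual prefix fed to the LMM.
if __name__ == "__main__":
    patches = torch.randn(2, 196, 768)      # stand-in vision-encoder output
    reasoner = LatentVisualReasoningTokens()
    latents = reasoner(patches)             # (2, 16, 768)
    visual_prefix = torch.cat([patches, latents], dim=1)
    print(visual_prefix.shape)              # torch.Size([2, 212, 768])
```

Because the latents receive no explicit target, any supervision they get flows back from the downstream language-modeling loss, which is what makes the mechanism task-agnostic in spirit.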

Top-level tags: multi-modal, model training, natural language processing
Detailed tags: visual reasoning, multimodal models, unsupervised learning, instruction tuning, vision-language

Latent Implicit Visual Reasoning


1️⃣ One-Sentence Summary

This work proposes a method that, without any manually annotated supervision, lets large multimodal models automatically discover and use visual reasoning tokens, yielding stronger generalization and reasoning across a range of vision-centric tasks.

Source: arXiv:2512.21218