📄 Abstract - Monet: Reasoning in Latent Visual Space Beyond Images and Language

"Thinking with images" has emerged as an effective paradigm for advancing visual reasoning, extending beyond text-only chains of thought by injecting visual evidence into intermediate reasoning steps. However, existing methods fall short of human-like abstract visual thinking, as their flexibility is fundamentally limited by external tools. In this work, we introduce Monet, a training framework that enables multimodal large language models (MLLMs) to reason directly within the latent visual space by generating continuous embeddings that function as intermediate visual thoughts. We identify two core challenges in training MLLMs for latent visual reasoning: high computational cost in latent-vision alignment and insufficient supervision over latent embeddings, and address them with a three-stage distillation-based supervised fine-tuning (SFT) pipeline. We further reveal a limitation of applying GRPO to latent reasoning: it primarily enhances text-based reasoning rather than latent reasoning. To overcome this, we propose VLPO (Visual-latent Policy Optimization), a reinforcement learning method that explicitly incorporates latent embeddings into policy gradient updates. To support SFT, we construct Monet-SFT-125K, a high-quality text-image interleaved CoT dataset containing 125K real-world, chart, OCR, and geometry CoTs. Our model, Monet-7B, shows consistent gains across real-world perception and reasoning benchmarks and exhibits strong out-of-distribution generalization on challenging abstract visual reasoning tasks. We also empirically analyze the role of each training component and discuss our early unsuccessful attempts, providing insights for future developments in visual latent reasoning. Our model, data, and code are available at this https URL.
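The core mechanism the abstract describes — feeding continuous embeddings back into the model as intermediate "visual thoughts" instead of decoding them into text or rendered images — can be illustrated with a deliberately minimal sketch. The `reason` helper and toy `step` function below are hypothetical stand-ins, not the paper's code:

```python
# Hypothetical sketch of latent visual reasoning (not Monet's actual code).
# Some reasoning steps emit continuous "latent visual thought" vectors that
# are fed straight back as next-step inputs, never verbalized as tokens.

def reason(model_step, prompt_embedding, n_latent_steps=3):
    """Run a few latent steps, then return the final hidden state.

    model_step: callable mapping an input vector to the next hidden vector
                (stands in for one forward pass of the MLLM).
    """
    h = prompt_embedding
    for _ in range(n_latent_steps):
        h = model_step(h)  # intermediate visual thought, kept in latent space
    return h

# Toy model_step: a scale-and-shift on a 2-d "embedding".
step = lambda v: [0.5 * x + 1.0 for x in v]
final = reason(step, [0.0, 2.0], n_latent_steps=2)  # -> [1.5, 2.0]
```

The point of the sketch is only the control flow: unlike tool-based "thinking with images", no external renderer or retriever sits between steps, so the thought vectors stay continuous and differentiable.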

Top-level tags: multi-modal, model training, llm
Detailed tags: visual reasoning, latent space, reinforcement learning, multimodal llms, knowledge distillation

📄 Paper Summary

Monet: Reasoning in Latent Visual Space Beyond Images and Language


1️⃣ One-Sentence Summary

This paper proposes Monet, a training framework that lets multimodal large language models reason directly in latent visual space by generating continuous visual-thought embeddings, and pairs it with purpose-built optimization (a three-stage distillation-based SFT pipeline and VLPO reinforcement learning) to address the training challenges, yielding consistent gains on real-world perception benchmarks and on abstract visual reasoning tasks.
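The abstract's observation that GRPO mainly strengthens text-based reasoning motivates VLPO's key change: the latent embeddings themselves must appear in the policy gradient. A heavily simplified, hypothetical sketch of such an objective follows; the unit-variance Gaussian policy over each latent dimension is our illustrative assumption, not a detail taken from the paper:

```python
import math

def gaussian_log_prob(x, mu, sigma=1.0):
    # log N(x; mu, sigma^2) for one scalar dimension
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def vlpo_style_objective(token_log_probs, latents, latent_means, advantage):
    """Hypothetical VLPO-flavored objective (illustration only).

    token_log_probs: per-token log-probabilities of the sampled text.
    latents:         emitted latent embeddings (list of vectors).
    latent_means:    the policy's predicted means for those embeddings.
    advantage:       scalar group-relative advantage, as in GRPO.
    """
    # Text term: the usual advantage-weighted sum of token log-probs.
    text_term = sum(token_log_probs)
    # Latent term: treat each emitted embedding as a sample from a Gaussian
    # policy, so the latent steps also receive policy-gradient signal.
    latent_term = sum(
        gaussian_log_prob(x, mu)
        for vec, mus in zip(latents, latent_means)
        for x, mu in zip(vec, mus)
    )
    return advantage * (text_term + latent_term)
```

Under this reading, plain GRPO corresponds to dropping `latent_term`, which is why it would leave the latent reasoning pathway largely untrained.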
