arXiv submission date: 2026-02-24
📄 Abstract - CrystaL: Spontaneous Emergence of Visual Latents in MLLMs

Multimodal Large Language Models (MLLMs) have achieved remarkable performance by integrating powerful language backbones with large-scale visual encoders. Among these, latent Chain-of-Thought (CoT) methods enable implicit reasoning in continuous hidden states, facilitating seamless vision-language integration and faster inference. However, existing heuristically predefined supervision signals in latent CoT provide limited guidance for preserving critical visual information in intermediate latent states. To address this limitation, we propose CrystaL (Crystallized Latent Reasoning), a single-stage framework with two paths to process intact and corrupted images, respectively. By explicitly aligning the attention patterns and prediction distributions across the two paths, CrystaL crystallizes latent representations into task-relevant visual semantics, without relying on auxiliary annotations or external modules. Extensive experiments on perception-intensive benchmarks demonstrate that CrystaL consistently outperforms state-of-the-art baselines, achieving substantial gains in fine-grained visual understanding while maintaining robust reasoning capabilities.

Top-level tags: multi-modal model training · natural language processing
Detailed tags: multimodal llms · latent reasoning · visual semantics · attention alignment · visual understanding

CrystaL: Spontaneous Emergence of Visual Latents in MLLMs


1️⃣ One-sentence summary

This paper proposes CrystaL, a single-stage framework that processes an intact image and a corrupted image in parallel and explicitly aligns their internal attention patterns and prediction distributions. This allows multimodal large language models, without any extra annotations, to spontaneously form and preserve task-relevant visual semantics in their latent reasoning states, yielding substantial gains on fine-grained visual understanding tasks.
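The two-path alignment idea can be sketched as a simple training loss: run the model on the intact and corrupted images, then penalize divergence between the two paths' attention maps and output distributions. The sketch below is a minimal illustration under assumed inputs; the function names, the loss weights, and the choice of KL divergence are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # KL(p || q) between two probability distributions (with smoothing).
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def crystal_alignment_loss(attn_clean, attn_corrupt,
                           logits_clean, logits_corrupt,
                           w_attn=1.0, w_pred=1.0):
    """Hypothetical two-path alignment loss (illustrative, not the paper's code).

    attn_clean / attn_corrupt: attention distributions (already normalized)
    logits_clean / logits_corrupt: next-token logits from each path
    """
    # Attention alignment: push the corrupted path's attention toward
    # the clean path's attention pattern.
    l_attn = kl_div(attn_clean, attn_corrupt)
    # Prediction alignment: match the two paths' output distributions.
    l_pred = kl_div(softmax(logits_clean), softmax(logits_corrupt))
    return w_attn * l_attn + w_pred * l_pred
```

When both paths produce identical attention and logits the loss is zero; any divergence between them increases it, which is the signal that would push intermediate latents to retain the visual information the corrupted path is missing.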

Source: arXiv 2602.20980