arXiv submission date: 2026-02-02
📄 Abstract - InfoTok: Regulating Information Flow for Capacity-Constrained Shared Visual Tokenization in Unified MLLMs

Unified multimodal large language models (MLLMs) integrate image understanding and generation in a single framework, with the visual tokenizer acting as the sole interface that maps visual inputs into tokens for downstream tasks. However, existing shared-token designs are mostly architecture-driven and lack an explicit criterion for what information tokens should preserve to support both understanding and generation. Therefore, we introduce a capacity-constrained perspective, highlighting that in shared-token unified MLLMs the visual tokenizer behaves as a compute-bounded learner, so the token budget should prioritize reusable structure over hard-to-exploit high-entropy variations and redundancy. Motivated by this perspective, we propose InfoTok, an information-regularized visual tokenization mechanism grounded in the Information Bottleneck (IB) principle. InfoTok formulates tokenization as controlling information flow from images to shared tokens to multimodal outputs, yielding a principled trade-off between compression and task relevance via mutual-information regularization. We integrate InfoTok into three representative unified MLLMs without introducing any additional training data. Experiments show consistent improvements on both understanding and generation, supporting information-regularized tokenization as a principled foundation for learning a shared token space in unified MLLMs.
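The abstract describes a mutual-information regularization in the spirit of the variational Information Bottleneck: a task-relevance term plus a β-weighted compression term that upper-bounds I(X; Z). As a minimal illustration (not the paper's actual implementation; the function names, the diagonal-Gaussian posterior, and the standard-normal prior are all assumptions), such an objective can be sketched as:

```python
import numpy as np

def kl_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    # This is the standard closed-form compression term used in
    # variational-IB-style objectives.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def ib_loss(task_loss, mu, logvar, beta=1e-3):
    # Variational-IB-style objective: task-relevance term (e.g. the
    # understanding/generation loss) plus a beta-weighted compression
    # term that upper-bounds the mutual information I(X; Z).
    compression = kl_gaussian(mu, logvar).mean()
    return task_loss + beta * compression, compression

# Toy example: a batch of 4 token posteriors with 8 latent dims each.
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8)) * 0.1     # posterior means near the prior
logvar = np.full((4, 8), -2.0)         # fairly confident posteriors
total, comp = ib_loss(task_loss=2.0, mu=mu, logvar=logvar, beta=1e-3)
```

Raising `beta` pushes the token posteriors toward the prior (stronger compression, discarding high-entropy detail); lowering it preserves more input information for the downstream tasks.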

Top-level tags: multi-modal model training, machine learning
Detailed tags: visual tokenization, information bottleneck, multimodal llms, model compression, unified understanding-generation

InfoTok: Regulating Information Flow for Capacity-Constrained Shared Visual Tokenization in Unified MLLMs


1️⃣ One-sentence summary

This paper proposes a method called InfoTok, which uses the Information Bottleneck principle to regulate the flow of visual information into a unified multimodal large language model. By prioritizing core structural information useful to both understanding and generation tasks, it improves the model's understanding and generation capabilities simultaneously under a limited compute budget.

Source: arXiv:2602.01554