瓶颈令牌:用于统一多模态检索 / Bottleneck Tokens for Unified Multimodal Retrieval
1️⃣ One-Sentence Summary
This paper proposes a new method called "Bottleneck Tokens": a set of learnable tokens that act as explicit information aggregators, paired with a new training objective. Together they address the difficulty multimodal large language models face in compressing and aggregating information for unified retrieval, achieving leading retrieval performance across multiple modalities and tasks.
Adapting decoder-only multimodal large language models (MLLMs) for unified multimodal retrieval faces two structural gaps. First, existing methods rely on implicit pooling, which overloads the hidden state of a standard vocabulary token (e.g., <EOS>) as the sequence-level representation, a mechanism never designed for information aggregation. Second, contrastive fine-tuning specifies what the embedding should match but provides no token-level guidance on how information should be compressed into it. We address both gaps with two complementary components. Architecturally, we introduce Bottleneck Tokens (BToks), a small set of learnable tokens that serve as a fixed-capacity explicit pooling mechanism. For training, we propose Generative Information Condensation: a next-token prediction objective coupled with a Condensation Mask that severs the direct attention path from target tokens to query tokens. All predictive signals are thereby forced through the BToks, converting the generative loss into dense, token-level supervision for semantic compression. At inference time, only the input and BToks are processed in a single forward pass with negligible overhead over conventional last-token pooling. On MMEB-V2 (78 datasets, 3 modalities, 9 meta-tasks), our approach achieves state-of-the-art among 2B-scale methods under comparable data conditions, attaining an Overall score of 59.0 (+3.6 over VLM2Vec-V2) with substantial gains on semantically demanding tasks (e.g., +12.6 on Video-QA).
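The Condensation Mask described above can be sketched as a modified causal attention mask. This is a minimal illustration, not the paper's implementation: it assumes a sequence layout of [query | BToks | target] and simply blocks the direct attention path from target positions to query positions, so any predictive signal for the targets must flow through the BToks.

```python
import numpy as np

def condensation_mask(n_query: int, n_btok: int, n_target: int) -> np.ndarray:
    """Boolean attention mask (True = may attend), sketching the
    Condensation Mask idea: standard causal attention, except target
    tokens cannot attend to query tokens directly.

    Assumed (hypothetical) layout: [query tokens | BToks | target tokens].
    """
    n = n_query + n_btok + n_target
    mask = np.tril(np.ones((n, n), dtype=bool))  # ordinary causal mask
    t0 = n_query + n_btok                        # index of first target token
    mask[t0:, :n_query] = False                  # sever target -> query path
    return mask
```

Under this sketch, BToks still attend to the query (so they can aggregate it), and target tokens attend to the BToks and to earlier targets, which is what turns the next-token prediction loss into supervision for the compressed BTok representations.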
Source: arXiv: 2604.11095