
📄 Abstract - Some Modalities are More Equal Than Others: Decoding and Architecting Multimodal Integration in MLLMs

Despite remarkable advancements in Multimodal Large Language Models (MLLMs), a fundamental question remains: are MLLMs robust to contradictory modalities? To rigorously study this, we introduce MMA-Bench, comprising videos and tasks that probe a model's reliance on specific modalities. Using black-box and white-box interpretability techniques, we provide a critical analysis of the brittleness of both open- and closed-source MLLMs. We show that current MLLMs struggle under misaligned audio-visual pairs and simple misleading text, and thus lack robust multimodal reasoning. Building on these findings, we propose a modality alignment tuning strategy to teach the model when to prioritize, leverage, or ignore specific modality cues. Through extensive experiments and analysis, we show that our alignment tuning yields demonstrably stronger multimodal grounding. This work provides both interpretability tools and a clear path toward developing MLLMs with intrinsically reliable cross-modal reasoning. Code and dataset will be made publicly available.
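The abstract describes probing modality reliance by confronting a model with misaligned audio-visual pairs and misleading text. Below is a minimal sketch of what such a black-box probe could look like, assuming a generic MLLM interface; `MLLMClient`, `Sample`, and `flip_rate` are hypothetical names for illustration, not the paper's released code or the actual MMA-Bench protocol.

```python
# Hypothetical black-box probe in the spirit of MMA-Bench: compare a
# model's answers on aligned vs. deliberately misaligned inputs and
# measure how often the answer flips when one modality is corrupted.
from dataclasses import dataclass


@dataclass
class Sample:
    video: str       # path to the video clip
    audio: str       # path to the audio track (may be swapped from another clip)
    question: str    # task question posed to the model
    text_hint: str   # optional textual cue (may be deliberately misleading)


class MLLMClient:
    """Stand-in for any multimodal LLM API, open- or closed-source."""

    def answer(self, sample: Sample) -> str:
        raise NotImplementedError


def flip_rate(model: MLLMClient,
              aligned: list[Sample],
              misaligned: list[Sample]) -> float:
    """Fraction of items whose answer changes under a corrupted modality.

    A high flip rate on audio-swapped or misleading-text variants suggests
    over-reliance on that modality rather than robust cross-modal reasoning.
    """
    assert len(aligned) == len(misaligned)
    flips = sum(
        model.answer(a) != model.answer(m)
        for a, m in zip(aligned, misaligned)
    )
    return flips / len(aligned)
```

Running such a probe separately per corrupted modality (audio swap, visual swap, misleading text) would indicate which modality the model defaults to when the inputs disagree, which is the kind of reliance analysis the abstract attributes to its black-box evaluation.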

Top-level tags: multi-modal, model evaluation, natural language processing
Detailed tags: multimodal robustness, modality alignment, benchmark, interpretability, multimodal integration

Some Modalities are More Equal Than Others: Decoding and Architecting Multimodal Integration in MLLMs


1️⃣ One-Sentence Summary

This paper finds that current multimodal large language models are error-prone when faced with contradictory audio-visual or textual information and lack robust cross-modal reasoning. To address this, it proposes a new modality alignment tuning method that teaches a model when to prioritize, leverage, or ignore cues from specific modalities, improving the reliability of its multimodal understanding.


📄 Open the original PDF