
🤖 System
📄 Abstract - Understanding and Harnessing Sparsity in Unified Multimodal Models

Large multimodal models have achieved remarkable progress in both understanding and generation. Recent efforts pursue unified multimodal models that integrate heterogeneous components to support both capabilities within a single framework. However, such unification introduces inference inefficiencies, e.g., specific tasks or samples may not require the full knowledge or capacity of the unified model. Yet, a systematic understanding of how these inefficiencies manifest across different components remains limited. In this work, we first conduct a systematic analysis of unified multimodal model components using training-free pruning as a probing methodology, considering both depth pruning and width reduction. Our study reveals that the understanding component exhibits notable compressibility in both understanding and generation tasks, which is more pronounced in the latter. In contrast, the generation components are highly sensitive to compression, with performance deteriorating sharply even under moderate compression ratios. To address this limitation, we propose the Mixture-of-Experts (MoE) Adaptation, inspired by the dynamic activation patterns observed across different samples. This approach partitions the generation module into multiple experts and enables sparse activation to restore generation quality. We validate the effectiveness of sparse activation through expert-frozen tuning and further demonstrate that a fully trainable adaptation delivers additional gains. As a result, the adapted BAGEL model achieves performance comparable to the full model while activating only about half of its parameters. The code is released at this https URL.
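To make the sparse-activation idea more concrete, below is a minimal PyTorch sketch of how a dense feed-forward block in a generation module could be partitioned into experts with top-k routing. This is not the authors' implementation: the class name `SparseFFNAdapter`, the expert count, and the routing scheme are illustrative assumptions based only on the abstract's description.

```python
# Minimal sketch (illustrative, not the paper's code): split a dense FFN into
# experts and activate only top-k of them per token, so roughly
# top_k / num_experts of the FFN capacity is used at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseFFNAdapter(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        assert d_ff % num_experts == 0
        d_expert = d_ff // num_experts
        # Each expert holds a slice of the original FFN width, so the total
        # parameter count matches the dense layer it replaces.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_expert), nn.GELU(), nn.Linear(d_expert, d_model))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(d_model, num_experts)  # per-token gating scores
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        gate = F.softmax(self.router(x), dim=-1)               # routing probabilities
        weights, idx = gate.topk(self.top_k, dim=-1)           # keep only the top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over kept experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)                      # tokens routed to expert e at slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

Splitting the original hidden dimension across experts keeps the total parameter budget of the dense layer, while each token only activates a fraction of it; this mirrors in spirit the reported result that the adapted BAGEL model matches the full model while activating about half of its parameters.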

Top-level tags: multi-modal, model training, model evaluation
Detailed tags: sparsity, mixture-of-experts, model compression, multimodal understanding, generation, efficiency

Understanding and Harnessing Sparsity in Unified Multimodal Models


1️⃣ One-Sentence Summary

Through its analysis, this paper finds that the understanding component of a unified multimodal model can be heavily compressed with little impact on performance, whereas the generation component is highly sensitive to compression; to address this, the authors propose a Mixture-of-Experts adaptation based on sparse activation, which lets the model match the performance of the full model while activating only about half of its parameters.


📄 Open the original PDF