Script: Graph-Structured and Query-Conditioned Semantic Token Pruning for Multimodal Large Language Models
1️⃣ One-Sentence Summary
This paper proposes Script, a plug-and-play token pruning method that combines graph-structured pruning with query-conditioned semantic pruning. Without any retraining, it significantly improves both the computational efficiency and the task accuracy of multimodal large language models when processing images and videos.
The rapid growth of visual tokens in multimodal large language models (MLLMs) leads to excessive memory consumption and inference latency, especially when handling high-resolution images and videos. Token pruning mitigates this issue by removing redundant tokens, but existing methods often ignore relevance to the user query or inherit the limitations of attention mechanisms, reducing their adaptability and effectiveness. To address these challenges, we propose Script, a plug-and-play pruning method that requires no retraining and generalizes across diverse MLLMs. Script comprises two modules: a graph-structured pruning module that removes visually redundant tokens, and a query-conditioned semantic pruning module that preserves query-relevant visual information. Together, they enhance performance on multimodal tasks. Experiments on fourteen benchmarks spanning image and video understanding show that Script consistently achieves higher model efficiency and predictive accuracy than existing pruning methods. On LLaVA-NeXT-7B, it achieves up to 6.8x prefill speedup and a 10x FLOP reduction while retaining 96.88% of the original performance.
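The abstract describes a two-stage design but not its internals. The sketch below is a minimal, hypothetical illustration of that pipeline shape, not the paper's actual algorithm: stage 1 drops visual tokens that are near-duplicates of already-kept tokens (a greedy stand-in for graph-structured redundancy pruning), and stage 2 keeps the fraction of survivors most similar to the query embedding (a stand-in for query-conditioned semantic pruning). The function name, thresholds, and cosine-similarity scoring are all assumptions for illustration.

```python
import numpy as np

def script_style_prune(tokens, query, keep_semantic=0.5, sim_thresh=0.9):
    """Hypothetical two-stage pruning sketch (not the paper's method).

    tokens: (N, D) array of visual token embeddings.
    query:  (D,) query embedding.
    Returns sorted indices of the tokens that survive both stages.
    """
    # Normalize so dot products are cosine similarities.
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)

    # Stage 1 (redundancy): greedily keep a token only if it is not too
    # similar to any token already kept -- a crude proxy for pruning
    # edges of a token-similarity graph.
    kept = []
    for i in range(len(t)):
        if all(float(t[i] @ t[j]) < sim_thresh for j in kept):
            kept.append(i)
    kept = np.array(kept)

    # Stage 2 (query relevance): rank survivors by similarity to the
    # query and retain only the top fraction.
    scores = t[kept] @ q
    n_keep = max(1, int(len(kept) * keep_semantic))
    top = kept[np.argsort(-scores)[:n_keep]]
    return np.sort(top)
```

Because both stages operate only on embeddings, a sketch like this is training-free by construction, which matches the plug-and-play claim; the real method's graph construction and semantic scoring are more sophisticated.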