Mostly Text, Smart Visuals: Asymmetric Text-Visual Pruning for Large Vision-Language Models
1️⃣ One-Sentence Summary
This paper proposes a new method called ATV-Pruning. By discovering and exploiting the differing importance of textual and visual information in large vision-language models, it carefully preserves the textual pathway while aggressively compressing the redundant visual pathway, making model compression both more efficient and more accurate.
Network pruning is an effective technique for enabling lightweight Large Vision-Language Models (LVLMs), and existing importance metrics typically incorporate both weights and activations. However, existing efforts process calibration data from different modalities in a unified manner, overlooking modality-specific behaviors. This raises a critical challenge: how to address the divergent behaviors of textual and visual tokens for accurate pruning of LVLMs. To this end, we systematically investigate the sensitivity of visual and textual tokens to the pruning operation by decoupling their corresponding weights, revealing that: (i) the textual pathway should be calibrated via text tokens, since it exhibits higher sensitivity than the visual pathway; (ii) the visual pathway exhibits high redundancy, permitting even 50% sparsity. Motivated by these insights, we propose a simple yet effective Asymmetric Text-Visual Weight Pruning method for LVLMs, dubbed ATV-Pruning, which establishes the importance metric for accurate weight pruning by selecting informative tokens from both the textual and visual pathways. Specifically, ATV-Pruning integrates two primary innovations: first, a calibration pool is adaptively constructed by drawing on all textual tokens and a subset of visual tokens; second, we devise a layer-adaptive selection strategy to yield important visual tokens. Finally, extensive experiments across standard multimodal benchmarks verify the superiority of our ATV-Pruning over state-of-the-art methods.
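To make the asymmetric-calibration idea concrete, here is a minimal sketch in NumPy. It is not the authors' implementation: the function names, the Wanda-style importance score (|W| weighted by per-feature activation norms), the fixed `keep_ratio`, and the norm-based visual-token selection are all illustrative assumptions standing in for the paper's calibration pool and layer-adaptive selection strategy.

```python
import numpy as np

def build_calibration_pool(text_acts, visual_acts, keep_ratio=0.25):
    """Asymmetric calibration pool (illustrative): keep ALL text tokens,
    but only a subset of visual tokens, chosen here by activation L2 norm.
    The paper's layer-adaptive selection would vary this choice per layer."""
    norms = np.linalg.norm(visual_acts, axis=1)          # one norm per visual token
    k = max(1, int(keep_ratio * len(visual_acts)))       # how many visual tokens to keep
    top = visual_acts[np.argsort(norms)[-k:]]            # highest-norm visual tokens
    return np.concatenate([text_acts, top], axis=0)

def prune_layer(weight, calib_acts, sparsity=0.5):
    """Prune a linear layer with a Wanda-style metric computed on the
    asymmetric calibration pool: score_ij = |W_ij| * ||X_j||_2."""
    col_norms = np.linalg.norm(calib_acts, axis=0)       # per input-feature norm
    score = np.abs(weight) * col_norms[None, :]
    k = int(sparsity * weight.size)                      # number of weights to zero
    thresh = np.partition(score.ravel(), k)[k]           # k-th smallest score
    return weight * (score >= thresh)                    # zero out low-score weights
```

Under this sketch, text tokens always dominate the calibration statistics, while only the most informative visual tokens contribute, mirroring the paper's finding that the textual pathway is pruning-sensitive and the visual pathway is highly redundant.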
Source: arXiv: 2603.16001