
arXiv submission date: 2026-04-27
📄 Abstract - Structural Pruning of Large Vision Language Models: A Comprehensive Study on Pruning Dynamics, Recovery, and Data Efficiency

While Large Vision Language Models (LVLMs) demonstrate impressive capabilities, their substantial computational and memory requirements pose deployment challenges on resource-constrained edge devices. Current parameter reduction techniques primarily involve training LVLMs from small language models, but these methods offer limited flexibility and remain computationally intensive. We study a complementary route: compressing existing LVLMs by applying structured pruning to the language model backbone, followed by lightweight recovery training. Specifically, we investigate two structural pruning paradigms, layerwise and widthwise pruning, and pair them with supervised finetuning and knowledge distillation on logits and hidden states. Additionally, we assess the feasibility of conducting recovery training with only a small fraction of the available data. Our results show that widthwise pruning generally maintains better performance in low-resource scenarios, where computational resources are limited or finetuning data is insufficient. As for recovery training, finetuning only the multimodal projector is sufficient at small compression levels. Furthermore, a combination of supervised finetuning and hidden-state distillation yields optimal recovery across various pruning levels. Notably, effective recovery can be achieved using just 5% of the original data while retaining over 95% of the original performance. Through an empirical study on three representative LVLM families ranging from 3B to 7B parameters, this study offers actionable insights for practitioners to compress LVLMs even without extensive computational resources or abundant finetuning data. The code base is available at this https URL.
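The two pruning paradigms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the even-spacing layer selection and the L2-norm channel-importance score below are assumed, commonly used heuristics for depth and width pruning, respectively:

```python
import numpy as np

def layerwise_prune(layers, keep_ratio):
    """Layerwise (depth) pruning sketch: keep an evenly spaced
    subset of the transformer blocks in the language backbone."""
    n_keep = max(1, round(len(layers) * keep_ratio))
    if n_keep == 1:
        return [layers[0]]
    step = (len(layers) - 1) / (n_keep - 1)  # spread kept blocks over full depth
    return [layers[round(i * step)] for i in range(n_keep)]

def widthwise_prune(W, keep_ratio):
    """Widthwise pruning sketch: drop the output channels (rows) of a
    weight matrix with the smallest L2 norm. The magnitude-based
    importance score is an assumption, not the paper's criterion."""
    n_keep = max(1, round(W.shape[0] * keep_ratio))
    scores = np.linalg.norm(W, axis=1)            # importance per output channel
    keep = np.sort(np.argsort(scores)[-n_keep:])  # top-k channels, original order
    return W[keep]
```

For example, `layerwise_prune(blocks, 0.75)` on a 32-layer backbone keeps 24 blocks including the first and last, while `widthwise_prune` shrinks each weight matrix's output dimension in place.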

Top-level tags: multi-modal, model training, model evaluation
Detailed tags: structural pruning, large vision language models, pruning dynamics, knowledge distillation, data efficiency

Structural Pruning of Large Vision Language Models: A Comprehensive Study on Pruning Dynamics, Recovery, and Data Efficiency


1️⃣ One-sentence summary

This paper systematically studies compressing the language-model backbone of large vision language models via structural pruning (layerwise or widthwise), combined with finetuning and knowledge distillation for efficient recovery, finding that just 5% of the original data suffices to recover over 95% of the original performance.
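The recovery recipe highlighted above combines supervised finetuning with distillation on logits and hidden states. A minimal numpy sketch of such a combined objective follows; the loss weights `alpha`/`beta`/`gamma`, the temperature `tau`, and the exact KL/MSE formulation are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def recovery_loss(student_logits, teacher_logits, student_h, teacher_h, labels,
                  alpha=1.0, beta=1.0, gamma=1.0, tau=2.0):
    """Sketch of a combined recovery objective:
    SFT cross-entropy + logit distillation (KL) + hidden-state MSE."""
    # supervised finetuning: cross-entropy against ground-truth labels
    p = softmax(student_logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # logit distillation: temperature-scaled KL(teacher || student)
    pt = softmax(teacher_logits / tau)
    ps = softmax(student_logits / tau)
    kd = np.mean(np.sum(pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12)),
                        axis=-1)) * tau ** 2
    # hidden-state distillation: MSE between student and teacher hidden states
    hs = np.mean((student_h - teacher_h) ** 2)
    return alpha * ce + beta * kd + gamma * hs
```

When the pruned student exactly matches the teacher, the KD and hidden-state terms vanish and only the supervised cross-entropy remains, which is the sanity check one would run before training.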

Source: arXiv: 2604.24380