Submitted to arXiv: 2026-04-27
📄 Abstract - LearnPruner: Rethinking Attention-based Token Pruning in Vision Language Models

Vision-Language Models (VLMs) have recently demonstrated remarkable capabilities in visual understanding and reasoning, but they also impose significant computational burdens due to long visual sequence inputs. Recent works address this issue by pruning unimportant visual tokens, achieving substantial computational reduction while maintaining model performance. The core of token pruning lies in determining token importance, with current approaches primarily relying on attention scores from vision encoders or Large Language Models (LLMs). In this paper, we analyze the effectiveness of attention mechanisms in both vision encoders and LLMs. We find that vision encoders suffer from attention sink, leading to poor focus on informative foreground regions, while in LLMs, although prior studies have identified attention bias toward token positions, text-to-vision attention demonstrates resistance to this bias and enables effective pruning guidance in middle layers. Based on these observations, we propose LearnPruner, a two-stage token pruning framework that first removes redundant vision tokens via a learnable pruning module after the vision encoder, then retains only task-relevant tokens in the LLM's middle layer. Experimental results show that our LearnPruner can preserve approximately 95% of the original performance while using only 5.5% of vision tokens, and achieve 3.2$\times$ inference acceleration, demonstrating a superior accuracy-efficiency trade-off.
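The abstract describes the two pruning stages only at a high level. The PyTorch sketch below is one plausible reading of them, not the paper's actual implementation: the `LearnableVisionPruner` module, the `prune_by_text_attention` helper, and all shapes and keep ratios are illustrative assumptions, with the ratios chosen so the two stages compound to roughly the reported 5.5% token budget.

```python
# Hedged sketch of a two-stage vision-token pruning pipeline.
# All names, shapes, and keep ratios are assumptions for illustration.
import torch
import torch.nn as nn


class LearnableVisionPruner(nn.Module):
    """Stage 1 (assumed form): score vision tokens with a small learnable
    head after the vision encoder and keep only the top-scoring ones.
    (Training such a head end-to-end would need a differentiable
    relaxation, e.g. Gumbel-softmax; omitted in this inference sketch.)"""

    def __init__(self, dim: int, keep_ratio: float = 0.25):
        super().__init__()
        self.score_head = nn.Linear(dim, 1)  # hypothetical scoring head
        self.keep_ratio = keep_ratio

    def forward(self, vision_tokens: torch.Tensor) -> torch.Tensor:
        # vision_tokens: (batch, num_tokens, dim)
        scores = self.score_head(vision_tokens).squeeze(-1)         # (B, N)
        k = max(1, int(vision_tokens.size(1) * self.keep_ratio))
        idx = scores.topk(k, dim=-1).indices                        # (B, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, vision_tokens.size(-1))
        return vision_tokens.gather(1, idx)                         # (B, k, dim)


def prune_by_text_attention(vision_tokens: torch.Tensor,
                            text_to_vision_attn: torch.Tensor,
                            keep_ratio: float = 0.22) -> torch.Tensor:
    """Stage 2 (assumed form): at a middle LLM layer, keep the vision
    tokens that receive the most attention from the text tokens."""
    # text_to_vision_attn: (batch, num_text_tokens, num_vision_tokens),
    # e.g. already averaged over heads; summing over the text queries
    # gives a per-vision-token relevance score.
    relevance = text_to_vision_attn.sum(dim=1)                      # (B, N)
    k = max(1, int(vision_tokens.size(1) * keep_ratio))
    idx = relevance.topk(k, dim=-1).indices
    idx = idx.unsqueeze(-1).expand(-1, -1, vision_tokens.size(-1))
    return vision_tokens.gather(1, idx)


if __name__ == "__main__":
    B, N, D, T = 2, 576, 1024, 32   # batch, vision tokens, dim, text tokens
    tokens = torch.randn(B, N, D)
    stage1 = LearnableVisionPruner(D, keep_ratio=0.25)
    kept = stage1(tokens)           # 576 -> 144 tokens
    attn = torch.rand(B, T, kept.size(1)).softmax(dim=-1)
    final = prune_by_text_attention(kept, attn, keep_ratio=0.22)
    print(kept.shape, final.shape)  # 0.25 * 0.22 ~= 5.5% of the original
```

Under these assumed ratios, stage 1 cuts the sequence to a quarter of its length before it ever reaches the LLM, and stage 2 keeps about a fifth of the survivors once text queries are available, which is where the compounded ~5.5% budget comes from.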

Top tags: medical llm
Detailed tags: token pruning, attention mechanism, vision language model, inference acceleration, learnable pruning

LearnPruner: Rethinking Attention-based Token Pruning in Vision Language Models


1️⃣ One-Sentence Summary

This paper proposes LearnPruner, a two-stage visual token pruning framework that, informed by an analysis of attention-mechanism deficiencies in vision encoders and language models, first removes redundant visual tokens with a learnable pruning module and then retains only task-relevant tokens at a middle layer of the language model, preserving about 95% of the original performance while using only 5.5% of the visual tokens and achieving a 3.2× inference speedup.

Source: arXiv 2604.23950