IVC-Prune: Revealing the Implicit Visual Coordinates in LVLMs for Vision Token Pruning
1️⃣ One-Sentence Summary
This paper proposes a new method, IVC-Prune, which discovers and retains "implicit visual coordinate" tokens that are critical for spatial reasoning. Without any additional training, it roughly halves the number of visual tokens a large vision-language model processes for high-resolution images, while maintaining or even improving performance across a wide range of tasks.
Large Vision-Language Models (LVLMs) achieve impressive performance across multiple tasks. A significant challenge, however, is their prohibitive inference cost when processing high-resolution visual inputs. While visual token pruning has emerged as a promising solution, existing methods that primarily focus on semantic relevance often discard tokens that are crucial for spatial reasoning. We address this gap through a novel insight into \emph{how LVLMs process spatial reasoning}. Specifically, we reveal that LVLMs implicitly establish visual coordinate systems through Rotary Position Embeddings (RoPE), where specific token positions serve as \textbf{implicit visual coordinates} (IVC tokens) that are essential for spatial reasoning. Based on this insight, we propose \textbf{IVC-Prune}, a training-free, prompt-aware pruning strategy that retains both IVC tokens and semantically relevant foreground tokens. IVC tokens are identified by theoretically analyzing the mathematical properties of RoPE, targeting positions at which its rotation matrices approximate the identity matrix or the $90^\circ$ rotation matrix. Foreground tokens are identified through a robust two-stage process: semantic seed discovery followed by contextual refinement via value-vector similarity. Extensive evaluations across four representative LVLMs and twenty diverse benchmarks show that IVC-Prune reduces visual tokens by approximately 50\% while maintaining $\geq$ 99\% of the original performance, and even achieves improvements on several benchmarks. Source code is available at this https URL.
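The abstract states that IVC tokens sit at positions where the RoPE rotation matrices approximate the identity or a $90^\circ$ rotation, but does not give the exact selection rule. The sketch below illustrates the idea under the standard 1-D RoPE parameterization (base 10000); the tolerance `tol` and the per-position score (fraction of frequency pairs whose rotation is near identity or $90^\circ$) are our own assumptions, not the paper's criterion.

```python
import numpy as np

def rope_angles(positions, dim=64, base=10000.0):
    """Per-frequency RoPE rotation angles m * theta_i, where
    theta_i = base^(-2i/dim) for each 2-D rotation pair i."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    return np.outer(positions, inv_freq)               # (num_pos, dim/2)

def ivc_candidate_score(num_positions, dim=64, base=10000.0, tol=0.05):
    """For each position, score the fraction of frequency pairs whose
    rotation angle (mod 2*pi) is within `tol` radians of 0 (identity)
    or pi/2 (90-degree rotation). Higher scores mark candidate
    implicit-visual-coordinate positions under this heuristic."""
    angles = rope_angles(np.arange(num_positions), dim, base) % (2 * np.pi)
    # Distance to the identity rotation, wrapping around 2*pi.
    near_identity = np.minimum(angles, 2 * np.pi - angles) < tol
    near_quarter = np.abs(angles - np.pi / 2) < tol
    return (near_identity | near_quarter).mean(axis=1)
```

Position 0 always scores 1.0 (every pair rotates by angle 0, i.e. the identity), which matches the intuition that the coordinate origin is an anchor position.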
Source: arXiv:2602.03060