arXiv submission date: 2026-01-17
📄 Abstract - Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models

Visual token compression is widely adopted to improve the inference efficiency of Large Vision-Language Models (LVLMs), enabling their deployment in latency-sensitive and resource-constrained scenarios. However, existing work has mainly focused on efficiency and performance, while the security implications of visual token compression remain largely unexplored. In this work, we first reveal that visual token compression substantially degrades the robustness of LVLMs: models that are robust under uncompressed inference become highly vulnerable once compression is enabled. These vulnerabilities are state-specific; failure modes emerge only in the compressed setting and disappear entirely when compression is disabled, making them especially hard to detect and diagnose. By analyzing the key stages of the compression process, we identify instability in token importance ranking as the primary cause of this robustness degradation. Small, imperceptible perturbations can significantly alter token rankings, leading the compression mechanism to mistakenly discard task-critical information and ultimately causing model failure. Motivated by this observation, we propose a Compression-Aware Attack (CAA) to systematically study and exploit this vulnerability. CAA directly targets the token selection mechanism and induces failures exclusively under compressed inference. We further extend this approach to more realistic black-box settings and introduce Transfer CAA, where neither the target model nor the compression configuration is accessible. We also evaluate potential defenses and find that they provide only limited protection. Extensive experiments across models, datasets, and compression methods show that visual token compression significantly undermines robustness, revealing a previously overlooked efficiency-security trade-off.
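
The fragility the abstract describes centers on importance-ranked token selection. The minimal sketch below illustrates the idea, not the paper's method: it assumes a FastV-style top-k pruning scheme, and the `keep_ratio`, random stand-in scores, and perturbation scale are all illustrative choices. It shows how a tiny change to importance scores can swap tokens in and out of the kept set near the top-k boundary.

```python
# Hypothetical sketch of attention-score-based visual token compression
# (FastV-style top-k pruning), illustrating ranking instability.
import torch

def compress_visual_tokens(tokens: torch.Tensor,
                           scores: torch.Tensor,
                           keep_ratio: float = 0.25):
    """Keep the top-k visual tokens ranked by an importance score.

    tokens: (N, d) visual token embeddings
    scores: (N,)   per-token importance (e.g., aggregated attention)
    """
    k = max(1, int(tokens.shape[0] * keep_ratio))
    keep_idx = scores.topk(k).indices
    return tokens[keep_idx], keep_idx

torch.manual_seed(0)
tokens = torch.randn(576, 1024)   # e.g., a 24x24 ViT patch grid
scores = torch.rand(576)          # stand-in for real attention scores

_, clean_idx = compress_visual_tokens(tokens, scores)

# A tiny score perturbation (epsilon ~ 1e-3, imperceptible at the input)
# reshuffles the ranking near the top-k cutoff, changing which tokens
# survive compression -- the instability CAA is designed to exploit.
perturbed = scores + 1e-3 * torch.randn_like(scores)
_, adv_idx = compress_visual_tokens(tokens, perturbed)

flipped = len(set(clean_idx.tolist()) ^ set(adv_idx.tolist())) // 2
print(f"tokens swapped in/out of the kept set: {flipped}")
```

Because the uncompressed model never applies this selection step, any failure induced this way vanishes when compression is disabled, which is what makes the vulnerability state-specific.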

Top-level tags: multi-modal, model evaluation, systems
Detailed tags: vision-language models, adversarial attack, token compression, security vulnerability, robustness

Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models


1️⃣ One-Sentence Summary

This paper finds that compressing visual tokens in large vision-language models for efficiency severely weakens their robustness, making them far more likely to fail under small, imperceptible input perturbations, and thereby reveals a previously overlooked trade-off between efficiency and security.

Source: arXiv:2601.12042