📄 Paper Summary
Can Visual Input Be Compressed? A Visual Token Compression Benchmark for Large Multimodal Models
1️⃣ One-Sentence Summary
This paper introduces UniPruneBench, a unified benchmark for systematically evaluating visual token compression methods in large multimodal models. It finds that random pruning is a surprisingly strong baseline and that the pruning ratio is the dominant factor governing performance degradation.
Large multimodal models (LMMs) often suffer from severe inference inefficiency due to the large number of visual tokens introduced by image encoders. While recent token compression methods, such as pruning and merging, have shown promise in reducing redundancy, their evaluation remains fragmented and inconsistent. In this work, we present UniPruneBench, a unified and extensible benchmark for visual token pruning in multimodal LLMs. UniPruneBench provides standardized protocols across six ability dimensions and ten datasets, covering ten representative compression algorithms and three families of LMMs (LLaVA-v1.5, InternVL3, and Qwen2.5-VL). Beyond task accuracy, it incorporates system-level metrics such as runtime and prefilling latency to provide a holistic view. Our experiments uncover several key findings: (1) random pruning is a surprisingly strong baseline, (2) no single method consistently outperforms others across scenarios, (3) pruning sensitivity varies significantly across tasks, with OCR being most vulnerable, and (4) pruning ratio is the dominant factor governing performance degradation. We believe UniPruneBench will serve as a reliable foundation for future research on efficient multimodal modeling.
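The random-pruning baseline the abstract highlights can be sketched in a few lines. This is an illustrative reconstruction, not code from the benchmark: the function name `random_prune`, the use of plain Python lists, and the 576-token count (the LLaVA-v1.5 visual grid size) are assumptions for the example.

```python
import random

def random_prune(tokens, keep_ratio):
    """Randomly keep a fraction of visual tokens, preserving order.

    tokens: sequence of visual token embeddings (any objects)
    keep_ratio: fraction in (0, 1] of tokens to retain
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Sample without replacement, then sort indices so the surviving
    # tokens keep their original spatial order in the sequence.
    keep_idx = sorted(random.sample(range(len(tokens)), n_keep))
    return [tokens[i] for i in keep_idx]

# Example: 576 visual tokens, keep 25% of them
tokens = list(range(576))
pruned = random_prune(tokens, 0.25)
print(len(pruned))  # → 144
```

Despite ignoring token content entirely, the paper reports this kind of uniform sampling as a surprisingly strong baseline, which suggests much of the performance drop is explained by the pruning ratio itself rather than by which tokens a method selects.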