arXiv submission date: 2026-02-08
📄 Abstract - FlashVID: Efficient Video Large Language Models via Training-free Tree-based Spatiotemporal Token Merging

Although Video Large Language Models (VLLMs) have shown remarkable capabilities in video understanding, they must process high volumes of visual tokens, causing significant computational inefficiency. Existing VLLM acceleration frameworks typically compress spatial and temporal redundancy independently, overlooking spatiotemporal relationships and thus yielding suboptimal compression. Due to the dynamic nature of video, highly correlated visual features tend to shift in spatial position, scale, orientation, and other attributes over time. Building on this insight, we introduce FlashVID, a training-free inference acceleration framework for VLLMs. Specifically, FlashVID uses Attention and Diversity-based Token Selection (ADTS) to pick the most representative tokens as a basic video representation, then applies Tree-based Spatiotemporal Token Merging (TSTM) for fine-grained elimination of spatiotemporal redundancy. Extensive experiments on three representative VLLMs across five video understanding benchmarks demonstrate the effectiveness and generality of our method. Notably, while retaining only 10% of visual tokens, FlashVID preserves 99.1% of the performance of LLaVA-OneVision. FlashVID can therefore serve as a training-free, plug-and-play module for extending long video input, enabling a 10x increase in video frames fed to Qwen2.5-VL and a relative improvement of 8.6% within the same computational budget. Code is available at this https URL.
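The abstract describes the ADTS step as selecting tokens that are both important (high attention) and mutually diverse. The paper's actual algorithm is not given here, but the idea can be illustrated with a minimal, hypothetical greedy selection sketch in the style of maximal marginal relevance: each round picks the token with the best trade-off between its attention score and its redundancy with already-selected tokens. All names (`select_tokens`, `lam`) and the scoring rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_tokens(features, attn_scores, k, lam=0.5):
    """Illustrative attention-and-diversity token selection (MMR-style sketch).

    features:    (N, D) visual token embeddings
    attn_scores: (N,)   per-token attention importance
    k:           number of tokens to keep
    lam:         trade-off between attention and diversity (assumed hyperparameter)
    """
    # Normalize so dot products become cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Seed with the most-attended token.
    selected = [int(np.argmax(attn_scores))]
    for _ in range(k - 1):
        sim_to_sel = feats @ feats[selected].T        # (N, |selected|) cosine sims
        redundancy = sim_to_sel.max(axis=1)           # closeness to the chosen set
        score = lam * attn_scores - (1 - lam) * redundancy
        score[selected] = -np.inf                     # never re-pick a token
        selected.append(int(np.argmax(score)))
    return np.array(selected)
```

With `lam` near 1 the selection degenerates to plain top-k attention; smaller values trade importance for coverage of distinct visual content, which matches the abstract's motivation for combining the two criteria.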

Top-level tags: video, model training, natural language processing
Detailed tags: video llms, token merging, inference acceleration, spatiotemporal compression, efficiency

FlashVID: Efficient Video Large Language Models via Training-free Tree-based Spatiotemporal Token Merging


1️⃣ One-sentence summary

This paper proposes FlashVID, a training-free acceleration framework that intelligently merges similar or redundant visual information in a video, allowing video large language models to retain over 99% of their understanding performance while processing only 10% of the visual tokens, greatly improving efficiency on long videos.

Source: arXiv 2602.08024