Abstract - TimeViper: A Hybrid Mamba-Transformer Vision-Language Model for Efficient Long Video Understanding
We introduce TimeViper, a hybrid vision-language model designed to tackle the challenges of long video understanding. Processing long videos demands both an efficient model architecture and an effective mechanism for handling extended temporal contexts. To this end, TimeViper adopts a hybrid Mamba-Transformer backbone that combines the efficiency of state-space models with the expressivity of attention mechanisms. Through this hybrid design, we reveal a vision-to-text information aggregation phenomenon: information progressively flows from vision tokens to text tokens as LLM depth increases, leaving the vision tokens highly redundant. Motivated by this observation, we propose TransV, a token information transfer module that transfers and compresses vision tokens into instruction tokens while maintaining multimodal understanding capabilities. This design enables TimeViper to process hour-long videos exceeding 10,000 frames. Extensive experiments across multiple benchmarks demonstrate that TimeViper remains competitive with state-of-the-art models while scaling to far larger numbers of input frames. We further analyze the attention behaviors of both Mamba and Transformer layers, offering new insights into hybrid model interpretability. This work represents an initial step towards developing, interpreting, and compressing hybrid Mamba-Transformer architectures.
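To make the token-transfer idea more concrete, below is a minimal, hypothetical PyTorch sketch of one way vision information could be absorbed into text tokens before redundant vision tokens are pruned. The class name `TokenTransferSketch`, the cross-attention design, the `keep_ratio` parameter, and the uniform-stride token selection are illustrative assumptions for exposition only, not the paper's actual TransV implementation.

```python
import torch
import torch.nn as nn


class TokenTransferSketch(nn.Module):
    """Hypothetical sketch: let text tokens absorb visual information via
    cross-attention, then keep only a fraction of the vision tokens.
    Not the paper's actual TransV module."""

    def __init__(self, dim: int, num_heads: int = 8, keep_ratio: float = 0.25):
        super().__init__()
        self.keep_ratio = keep_ratio
        # Text tokens query the vision tokens so visual content is transferred
        # into the text stream before vision tokens are compressed away.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vision_tokens: torch.Tensor, text_tokens: torch.Tensor):
        # vision_tokens: (B, Nv, D), text_tokens: (B, Nt, D)
        attended, _ = self.cross_attn(
            query=text_tokens, key=vision_tokens, value=vision_tokens
        )
        text_tokens = self.norm(text_tokens + attended)

        # Retain a subset of vision tokens (simple uniform striding here;
        # an importance-based selection would be another option).
        keep = max(1, int(vision_tokens.shape[1] * self.keep_ratio))
        idx = torch.linspace(0, vision_tokens.shape[1] - 1, keep).long()
        vision_tokens = vision_tokens[:, idx, :]
        return vision_tokens, text_tokens


if __name__ == "__main__":
    B, Nv, Nt, D = 2, 1024, 32, 256
    module = TokenTransferSketch(dim=D)
    v, t = module(torch.randn(B, Nv, D), torch.randn(B, Nt, D))
    print(v.shape, t.shape)  # (2, 256, 256) and (2, 32, 256)
```

The sketch only illustrates the general pattern suggested by the abstract: information is first transferred from vision to text tokens, after which the now-redundant vision tokens can be compressed, shrinking the sequence the backbone must process for very long videos.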
📄 Paper Summary
TimeViper: A Hybrid Mamba-Transformer Vision-Language Model for Efficient Long Video Understanding
1️⃣ One-Sentence Summary
This paper proposes a hybrid model called TimeViper that combines the strengths of Mamba and Transformer. Through a novel information compression technique, it efficiently processes videos up to an hour long, substantially improving long-video understanding while maintaining strong performance.