GroundVTS: Visual Token Sampling in Multimodal Large Language Models for Video Temporal Grounding
1️⃣ One-sentence summary
This paper proposes a new method called GroundVTS that lets video large language models intelligently select key video segments rather than uniformly sampling all frames, significantly improving their ability to precisely localize specific moments in a video.
Video temporal grounding (VTG) is a critical task in video understanding and a key capability for extending video large language models (Vid-LLMs) to broader applications. However, existing Vid-LLMs rely on uniform frame sampling to extract video information, resulting in a sparse distribution of key frames and the loss of crucial temporal cues. To address this limitation, we propose Grounded Visual Token Sampling (GroundVTS), a Vid-LLM architecture that focuses on the most informative temporal segments. GroundVTS employs a fine-grained, query-guided mechanism to filter visual tokens before feeding them into the LLM, thereby preserving essential spatio-temporal information and maintaining temporal coherence. Furthermore, we introduce a progressive optimization strategy that enables the LLM to effectively adapt to the non-uniform distribution of visual features, enhancing its ability to model temporal dependencies and achieve precise video localization. We comprehensively evaluate GroundVTS on three standard VTG benchmarks, where it outperforms existing methods, achieving a 7.7-point improvement in mIoU for moment retrieval and a 12.0-point improvement in mAP for highlight detection. Code is available at this https URL.
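To make the core idea concrete, here is a minimal sketch of query-guided visual token filtering in the spirit the abstract describes: per-frame visual tokens are scored against a text query embedding, only the top-scoring tokens are retained, and temporal order is restored before they would be passed to the LLM. The function name, shapes, and the cosine-similarity scoring are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np


def query_guided_token_sampling(frame_tokens, query_embedding, keep_ratio=0.25):
    """Keep the frame tokens most similar to the query, in temporal order.

    frame_tokens: (T, D) array of per-frame visual features (assumed shape)
    query_embedding: (D,) text-query feature (assumed shape)
    keep_ratio: fraction of tokens retained for the LLM (illustrative knob)
    """
    # Cosine similarity between each visual token and the query
    ft = frame_tokens / np.linalg.norm(frame_tokens, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = ft @ q  # shape (T,)

    # Select the top-k tokens by score
    k = max(1, int(len(frame_tokens) * keep_ratio))
    top = np.argsort(-scores)[:k]

    # Re-sort the kept indices so temporal coherence is preserved
    top_sorted = np.sort(top)
    return frame_tokens[top_sorted], top_sorted


# Toy example: 8 frames, 4-dim features; the query is a noisy copy of frame 5
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
query = tokens[5] + 0.1 * rng.normal(size=4)
kept, idx = query_guided_token_sampling(tokens, query, keep_ratio=0.25)
print(idx)  # retained frame indices, in temporal order
```

In a real Vid-LLM pipeline the scores would come from a learned cross-modal module rather than raw cosine similarity, and the kept tokens would then be projected into the LLM's input sequence.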
Source: arXiv: 2604.02093