📄 Abstract - Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding

The application of Large Multimodal Models (LMMs) to long-form video understanding is constrained by limited context lengths and the computationally prohibitive cost of processing dense video tokens. Consequently, recent research has focused on query-aware frame selection, a class of methods that often incurs significant computational overhead. This paper challenges the assumption that such complex search mechanisms are universally necessary. We first identify and validate a query typology distinguishing between global queries and localized queries. We demonstrate that while uniform sampling is both effective and efficient for global queries, localized queries do necessitate query-aware selection for optimal performance. Building on this insight, we propose DIG, a training-free frame selection framework that adapts its strategy to the query type. Specifically, DIG employs efficient uniform sampling for global queries while activating a specialized pipeline to extract query-relevant frames for localized queries. Experiments on three long-form video understanding benchmarks demonstrate that DIG consistently outperforms existing baselines and robustly improves LMM performance, even when scaling the input frame count to 256.

Top-level tags: multi-modal model evaluation, computer vision
Detailed tags: video understanding, frame selection, query typology, large multimodal models, long-form video

Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding


1️⃣ One-Sentence Summary

This paper proposes DIG, a method that first determines whether a question about a long video is global or localized, then automatically applies the most efficient frame-extraction strategy for that query type, substantially reducing computational cost while preserving understanding accuracy.
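The dispatch idea at the heart of DIG (uniform sampling for global queries, query-relevant selection for localized ones) can be sketched as a minimal decision rule. This is an illustrative assumption, not the paper's actual pipeline: the function names, the query-type flag, and the top-k relevance selection below are all placeholders for whatever classifier and relevance scorer the framework actually uses.

```python
import numpy as np

def uniform_sample(num_frames: int, budget: int) -> list[int]:
    """Evenly spaced frame indices across the whole video."""
    return np.linspace(0, num_frames - 1, num=budget, dtype=int).tolist()

def topk_relevant(scores: np.ndarray, budget: int) -> list[int]:
    """Indices of the frames with the highest query-relevance scores,
    returned in temporal order (a stand-in for a query-aware pipeline)."""
    top = np.argsort(scores)[-budget:]
    return sorted(top.tolist())

def select_frames(num_frames: int, budget: int, query_type: str,
                  relevance_scores=None) -> list[int]:
    """Adaptive dispatch: cheap uniform sampling for global queries,
    query-aware selection only for localized queries."""
    if query_type == "global":
        return uniform_sample(num_frames, budget)
    # Localized query: fall back to the (assumed) relevance-based path.
    assert relevance_scores is not None, "localized queries need per-frame scores"
    return topk_relevant(np.asarray(relevance_scores), budget)
```

The point of the design is that the expensive scoring step runs only when the query type actually demands it; for global questions the selector never touches per-frame relevance at all.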
