arXiv submission date: 2025-12-05
📄 Abstract - Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding

Long video understanding (LVU) is challenging because answering real-world queries often depends on sparse, temporally dispersed cues buried in hours of mostly redundant and irrelevant content. While agentic pipelines improve video reasoning capabilities, prevailing frameworks rely on a query-agnostic captioner to perceive video information, which wastes computation on irrelevant content and blurs fine-grained temporal and spatial information. Motivated by active perception theory, we argue that LVU agents should actively decide what, when, and where to observe, and continuously assess whether the current observation is sufficient to answer the query. We present Active Video Perception (AVP), an evidence-seeking framework that treats the video as an interactive environment and acquires compact, query-relevant evidence directly from pixels. Concretely, AVP runs an iterative plan-observe-reflect process with MLLM agents. In each round, a planner proposes targeted video interactions, an observer executes them to extract time-stamped evidence, and a reflector evaluates the sufficiency of the evidence for the query, either halting with an answer or triggering further observation. Across five LVU benchmarks, AVP achieves the highest performance with significant improvements. Notably, AVP outperforms the best agentic method by 5.7% in average accuracy while requiring only 18.4% of the inference time and 12.4% of the input tokens.
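The plan-observe-reflect loop described in the abstract can be summarized as a simple control flow. The sketch below is illustrative only: the `Planner`, `Observer`, and `Reflector` interfaces, their method names, and the `max_rounds` cap are assumptions made for readability, not the authors' actual implementation.

```python
# Minimal sketch of an iterative plan-observe-reflect loop (assumed interfaces).

def active_video_perception(video, query, planner, observer, reflector, max_rounds=8):
    """Iteratively gather query-relevant, time-stamped evidence from a long video."""
    evidence = []  # accumulated time-stamped observations
    for _ in range(max_rounds):
        # Planner decides what, when, and where to observe next,
        # conditioned on the query and the evidence gathered so far.
        actions = planner.propose(query, evidence)

        # Observer executes the targeted video interactions
        # and returns time-stamped evidence extracted from pixels.
        evidence.extend(observer.execute(video, actions))

        # Reflector judges whether the evidence suffices to answer the query.
        verdict = reflector.assess(query, evidence)
        if verdict.sufficient:
            return verdict.answer  # halt early with an answer

    # Fall back to answering with whatever evidence was collected.
    return reflector.assess(query, evidence).answer
```

In the paper, each of the three roles is played by an MLLM agent; the reflector's sufficiency check is what allows the loop to halt early, which is how AVP keeps inference time and input tokens low relative to query-agnostic captioning pipelines.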

Top-level tags: agents video model evaluation
Detailed tags: active perception, long video understanding, multimodal llm, evidence seeking, agentic reasoning

Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding


1️⃣ One-Sentence Summary

This paper proposes a new framework called "Active Video Perception" that lets an AI behave like a detective: when watching a long video, it actively and selectively seeks out the visual evidence relevant to the question, achieving more accurate long-video understanding with far less computation.


Source: arXiv:2512.05774