arXiv submission date: 2026-01-11
📄 Abstract - Watching, Reasoning, and Searching: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning

In real-world video question answering scenarios, videos often provide only localized visual cues, while verifiable answers are distributed across the open web; models therefore need to jointly perform cross-frame clue extraction, iterative retrieval, and multi-hop verification through reasoning. To bridge this gap, we construct VideoDR, the first video deep research benchmark. VideoDR centers on video-conditioned open-domain video question answering, requiring cross-frame visual anchor extraction, interactive web retrieval, and multi-hop reasoning over joint video-web evidence. Through rigorous human annotation and quality control, we obtain high-quality video deep research samples spanning six semantic domains. We evaluate multiple closed-source and open-source multimodal large language models under both the Workflow and Agentic paradigms, and the results show that the Agentic paradigm is not consistently superior to the Workflow paradigm: its gains depend on a model's ability to maintain the initial video anchors over long retrieval chains. Further analysis indicates that goal drift and long-horizon consistency are the core bottlenecks. In sum, VideoDR provides a systematic benchmark for studying video agents in open-web settings and reveals the key challenges for next-generation video deep research agents.
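
To make the Workflow/Agentic distinction the abstract draws concrete, here is a minimal sketch of the two evaluation paradigms. This is not the authors' code: every name here (`extract_anchors`, `web_search`, `answer_from`, the `model.decide` interface, the step budget) is a hypothetical stand-in, and the real benchmark harness may differ substantially.

```python
# Hedged sketch: Workflow vs. Agentic paradigms for video deep research.
# All functions and interfaces below are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    snippets: list[str] = field(default_factory=list)


def extract_anchors(video_frames: list) -> list[str]:
    """Hypothetical: extract visual anchors (entities, text, logos) from sampled frames."""
    ...


def web_search(query: str) -> list[str]:
    """Hypothetical: return text snippets from an open-web search API."""
    ...


def answer_from(question: str, evidence: Evidence) -> str:
    """Hypothetical: produce a final answer conditioned on gathered evidence."""
    ...


def workflow_paradigm(video_frames: list, question: str) -> str:
    # Fixed pipeline: extract anchors once, issue one search per anchor, answer.
    anchors = extract_anchors(video_frames)
    evidence = Evidence()
    for anchor in anchors:
        evidence.snippets += web_search(f"{anchor} {question}")
    return answer_from(question, evidence)


def agentic_paradigm(video_frames: list, question: str, model, max_steps: int = 8) -> str:
    # Open-ended loop: the model decides when to search again and when to stop.
    # "Goal drift" is when later queries stop referencing the initial video
    # anchors; the abstract identifies this as the core long-horizon failure.
    anchors = extract_anchors(video_frames)
    evidence = Evidence()
    for _ in range(max_steps):
        action = model.decide(question, anchors, evidence)  # hypothetical interface
        if action.kind == "search":
            evidence.snippets += web_search(action.query)
        elif action.kind == "answer":
            return action.text
    return answer_from(question, evidence)  # fall back after the step budget
```

Under this reading, the abstract's finding is that the extra freedom in the agentic loop only pays off when the model keeps the initial `anchors` in play across many `decide`/`search` iterations; otherwise the fixed workflow pipeline can match or beat it.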

Top-level tags: benchmark multi-modal agents
Detailed tags: video question answering, open-web retrieval, multi-hop reasoning, agentic evaluation, video-web evidence

Watching, Reasoning, and Searching: A Video Deep Research Benchmark on Open Web for Agentic Video Reasoning


1️⃣ One-Sentence Summary

This paper introduces VideoDR, the first video deep research benchmark, which evaluates how AI models combine video cues with web retrieval to answer questions through multi-step reasoning. It finds that the agentic paradigm is not always superior to the workflow paradigm: success hinges on whether the model can accurately keep track of the initial video anchors across long retrieval chains.

Source: arXiv 2601.06943