arXiv submission date: 2025-12-23
📄 Abstract - LongVideoAgent: Multi-Agent Reasoning with Long Videos

Recent advances in multimodal LLMs and in tool-using systems for long-video QA point to the promise of reasoning over hour-long episodes. However, many methods still compress content into lossy summaries or rely on limited toolsets, weakening temporal grounding and missing fine-grained cues. We propose a multi-agent framework in which a master LLM coordinates a grounding agent to localize question-relevant segments and a vision agent to extract targeted textual observations. The master agent plans under a step limit and is trained with reinforcement learning to encourage concise, correct, and efficient multi-agent cooperation. This design helps the master agent focus on relevant clips via grounding, complements subtitles with visual detail, and yields interpretable trajectories. On our proposed LongTVQA and LongTVQA+, episode-level datasets aggregated from TVQA/TVQA+, our multi-agent system significantly outperforms strong non-agent baselines. Experiments also show that reinforcement learning further strengthens the trained agent's reasoning and planning. Code and data will be shared at this https URL.
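
To make the architecture concrete, here is a minimal Python sketch of the loop the abstract describes: a master LLM plans under a step limit and delegates to a grounding agent and a vision agent. This is an illustration under assumptions only; every name in it (`Action`, `master_llm.plan`, `grounding_agent.localize`, `vision_agent.describe`) is a hypothetical placeholder, not the paper's actual code or interface.

```python
# Hypothetical sketch of the coordination loop described in the abstract.
# All classes and method names below are placeholders, not the authors' API.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Action:
    """One planning decision by the master agent (hypothetical schema)."""
    kind: str                                      # "ground" | "observe" | "answer"
    segment: Optional[Tuple[float, float]] = None  # clip to inspect (seconds)
    query: str = ""                                # what the vision agent looks for
    text: str = ""                                 # final answer if kind == "answer"

def answer_question(question, video, subtitles, master_llm,
                    grounding_agent, vision_agent, max_steps=8):
    """Master-agent loop: localize relevant clips, gather targeted visual
    observations, then answer. `max_steps` is the step limit from the
    abstract; the returned trajectory makes the run interpretable."""
    trajectory = []  # interpretable record of the master agent's actions
    context = {"question": question, "subtitles": subtitles,
               "segments": [], "observations": []}

    for _ in range(max_steps):
        action = master_llm.plan(context)  # master LLM picks the next action
        trajectory.append(action)

        if action.kind == "ground":
            # Localize question-relevant segments instead of summarizing
            # the whole episode, avoiding lossy compression.
            context["segments"] = grounding_agent.localize(video, question)
        elif action.kind == "observe":
            # Extract a targeted textual observation from a grounded clip,
            # complementing the subtitles with visual detail.
            obs = vision_agent.describe(video, action.segment, action.query)
            context["observations"].append(obs)
        elif action.kind == "answer":
            return action.text, trajectory

    # Step limit reached: force a final answer from what was gathered.
    return master_llm.answer(context), trajectory
```

One plausible reading of the abstract's "concise, correct, and efficient" objective is that the RL reward trades answer correctness against the number of steps consumed under this limit, though the paper's exact reward design is not stated here.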

Top-level tags: multi-modal agents, model evaluation
Detailed tags: long video understanding, multi-agent reasoning, reinforcement learning, video question answering, temporal grounding

LongVideoAgent: Multi-Agent Reasoning with Long Videos


1️⃣ One-Sentence Summary

This paper proposes a multi-agent framework in which a master agent coordinates two sub-agents to precisely localize video segments and extract visual details, effectively addressing the information loss and imprecise temporal grounding that plague long-video question answering, and significantly outperforming existing methods on multiple datasets.

Source: arXiv:2512.20618