Agentic Very Long Video Understanding
1️⃣ One-Sentence Summary
This work proposes a new framework called EGAgent, which leverages entity scene graphs to help AI assistants understand and reason over very long egocentric videos captured by personal wearable devices over days or even weeks, achieving state-of-the-art performance on complex long-horizon video understanding tasks.
The advent of always-on personal AI assistants, enabled by all-day wearable devices such as smart glasses, demands a new level of contextual understanding, one that goes beyond short, isolated events to encompass the continuous, longitudinal stream of egocentric video. Achieving this vision requires advances in long-horizon video understanding, where systems must interpret and recall visual and audio information spanning days or even weeks. Existing methods, including large language models and retrieval-augmented generation, are constrained by limited context windows and lack the ability to perform compositional, multi-hop reasoning over very long video streams. In this work, we address these challenges through EGAgent, an enhanced agentic framework centered on entity scene graphs, which represent people, places, objects, and their relationships over time. Our system equips a planning agent with tools for structured search and reasoning over these graphs, as well as hybrid visual and audio search capabilities, enabling detailed, cross-modal, and temporally coherent reasoning. Experiments on the EgoLifeQA and Video-MME (Long) datasets show that our method achieves state-of-the-art performance on EgoLifeQA (57.5%) and competitive performance on Video-MME (Long) (74.1%) for complex longitudinal video understanding tasks.
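To make the core idea concrete, the sketch below shows one plausible shape for an entity scene graph: entities (people, places, objects) as nodes and timestamped relations as edges, with a pattern-matching query that an agent's tools could call to answer multi-hop temporal questions over a long video stream. This is a minimal illustration under assumed names (`Entity`, `Relation`, `EntitySceneGraph`, `query` are all hypothetical), not the paper's actual data structures or API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an entity scene graph over a long egocentric
# video: nodes are entities (people, places, objects) and edges are
# timestamped relations, so an agent can answer temporal queries via
# structured search instead of raw frame retrieval. All names here
# are illustrative, not taken from the paper.

@dataclass(frozen=True)
class Entity:
    entity_id: str
    kind: str  # e.g. "person", "place", "object"

@dataclass(frozen=True)
class Relation:
    subject: str    # entity_id of the subject
    predicate: str  # e.g. "holds", "located_in", "talks_to"
    obj: str        # entity_id of the object
    t_start: float  # seconds from the start of the video stream
    t_end: float

@dataclass
class EntitySceneGraph:
    entities: dict[str, Entity] = field(default_factory=dict)
    relations: list[Relation] = field(default_factory=list)

    def add_entity(self, entity: Entity) -> None:
        self.entities[entity.entity_id] = entity

    def add_relation(self, relation: Relation) -> None:
        self.relations.append(relation)

    def query(self, subject: str | None = None, predicate: str | None = None,
              obj: str | None = None, at: float | None = None) -> list[Relation]:
        """Return relations matching the given pattern, optionally at a time."""
        return [
            r for r in self.relations
            if (subject is None or r.subject == subject)
            and (predicate is None or r.predicate == predicate)
            and (obj is None or r.obj == obj)
            and (at is None or r.t_start <= at <= r.t_end)
        ]

# Example multi-hop query: "who held the mug while in the kitchen?"
g = EntitySceneGraph()
for e in [Entity("alice", "person"), Entity("mug", "object"),
          Entity("kitchen", "place")]:
    g.add_entity(e)
g.add_relation(Relation("alice", "holds", "mug", 120.0, 180.0))
g.add_relation(Relation("alice", "located_in", "kitchen", 100.0, 300.0))

holders = {r.subject for r in g.query(predicate="holds", obj="mug", at=150.0)}
in_kitchen = {r.subject for r in g.query(predicate="located_in", obj="kitchen", at=150.0)}
print(sorted(holders & in_kitchen))  # -> ['alice']
```

The point of such a structure is that a compositional question decomposes into a few cheap graph lookups joined on entity IDs, which is what lets a planning agent reason across days of footage without holding the raw video in a model's context window.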
Source: arXiv:2601.18157