📄 Abstract - LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling

Large multimodal models (LMMs) have shown great potential for video reasoning with textual Chain-of-Thought. However, they remain vulnerable to hallucinations, especially when processing long-form videos where evidence is sparse and temporally dispersed. Inspired by how humans comprehend long videos - by first skimming globally and then examining relevant clips for details - we introduce LongVT, an end-to-end agentic framework that enables "Thinking with Long Videos" via interleaved Multimodal Chain-of-Tool-Thought. Specifically, we exploit LMMs' inherent temporal grounding ability as a native video cropping tool to zoom in on a specific video clip and resample finer-grained video frames. This global-to-local reasoning loop continues until answers are grounded in retrieved visual evidence. Given the scarcity of fine-grained question-answering (QA) data for the long video reasoning task, we curate and will release a data suite named VideoSIAH to facilitate both training and evaluation. Specifically, our training dataset consists of 247.9K samples for tool-integrated cold-start supervised fine-tuning, 1.6K samples for agentic reinforcement learning, and 15.4K samples for agentic reinforcement fine-tuning, respectively. Our evaluation benchmark consists of 1,280 QA pairs that are carefully curated through a semi-automatic data pipeline with human-in-the-loop validation. With a meticulously designed three-stage training strategy and extensive empirical validation, LongVT consistently outperforms existing strong baselines across four challenging long-video understanding and reasoning benchmarks. Our codes, data, and model checkpoints are publicly available at this https URL.
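To make the global-to-local reasoning loop described above concrete, here is a minimal, non-authoritative sketch of interleaved tool-calling in Python. All names (`sample_frames`, `lmm_step`, the cropping tool-call format, and the sampling rates) are hypothetical illustrations of the idea, not the actual LongVT interface.

```python
# Minimal sketch of a global-to-local "Thinking with Long Videos" loop.
# Hypothetical helpers (not the real LongVT API):
#   sample_frames(video, start, end, fps) -> list of frames
#   lmm_step(context, ...)                -> object with .thought, .tool_call, .answer

def answer_long_video(video, question, max_rounds=5, coarse_fps=0.25, fine_fps=2.0):
    """Skim the whole video coarsely, then crop and resample clips until the answer is grounded."""
    # Global skim: sparse frames covering the full duration.
    frames = sample_frames(video, start=0.0, end=video.duration, fps=coarse_fps)
    context = [question, frames]

    for _ in range(max_rounds):
        step = lmm_step(context)          # model emits a thought plus an optional tool call
        if step.tool_call is None:
            return step.answer            # answer grounded in the evidence gathered so far

        # Native cropping tool: the model's temporal grounding selects a clip to zoom into,
        # which is then resampled at a finer frame rate.
        start, end = step.tool_call["start_sec"], step.tool_call["end_sec"]
        clip_frames = sample_frames(video, start=start, end=end, fps=fine_fps)
        context += [step.thought, clip_frames]

    # Budget exhausted: force a final answer from the accumulated context.
    return lmm_step(context, force_answer=True).answer
```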

Top-level tags: multi-modal agents, model training
Detailed tags: video reasoning, tool calling, long-form video, agentic framework, multimodal chain-of-thought

LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling


1️⃣ One-Sentence Summary

This paper proposes an agentic framework called LongVT, which mimics how humans watch long videos by first skimming globally and then focusing on details: the large model learns to "crop" video clips on its own and progressively search for the answer. This effectively mitigates existing models' tendency to hallucinate on long videos, and LongVT performs strongly across multiple benchmarks.


📄 Open original PDF