Video-CoE: Reinforcing Video Event Prediction via Chain of Events
1️⃣ One-Sentence Summary
This paper targets two shortcomings of current multimodal large language models (MLLMs) when predicting future events in videos: weak logical reasoning and insufficient use of visual information. It proposes a new "Chain of Events" method that constructs temporal event chains to guide the model toward the logical connections between video content and future events, significantly improving video event prediction accuracy and achieving state-of-the-art results on public benchmarks.
Despite advances in applying MLLMs to various video tasks, video event prediction (VEP) remains relatively underexplored. VEP requires the model to perform fine-grained temporal modeling of videos and to establish logical relationships between videos and future events, which current MLLMs still struggle with. In this work, we first present a comprehensive evaluation of leading MLLMs on the VEP task, revealing the reasons behind their inaccurate predictions: a lack of logical reasoning ability for predicting future events and insufficient utilization of visual information. To address these challenges, we propose the Chain of Events (CoE) paradigm, which constructs temporal event chains to implicitly enforce the MLLM to focus on the visual content and the logical connections between videos and future events, incentivizing the model's reasoning capability with multiple training protocols. Experimental results on public benchmarks demonstrate that our method outperforms both leading open-source and commercial MLLMs, establishing a new state of the art on the VEP task. Code and models will be released soon.
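The abstract does not give implementation details of the CoE paradigm. As a rough illustrative sketch only (all names and the prompt format below are hypothetical, not from the paper), a temporal event chain could be represented as ordered, time-stamped event descriptions that the model is asked to extend with the most plausible future event:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One observed event in the video, with its time span in seconds."""
    start: float
    end: float
    description: str

def build_coe_prompt(events: List[Event], question: str) -> str:
    """Format observed events as a temporal chain (hypothetical format)
    and ask the model to extend it with the next, future event."""
    # Sort by start time so the chain reflects the video's temporal order.
    chain = " -> ".join(
        f"[{e.start:.1f}-{e.end:.1f}s] {e.description}"
        for e in sorted(events, key=lambda e: e.start)
    )
    return (
        "Observed event chain:\n"
        f"{chain}\n"
        f"Question: {question}\n"
        "Extend the chain with the most plausible next event."
    )

events = [
    Event(0.0, 3.2, "a man picks up a basketball"),
    Event(3.2, 6.0, "he dribbles toward the hoop"),
]
prompt = build_coe_prompt(events, "What happens next?")
print(prompt)
```

The intuition matches the abstract: by making the model read (or generate) the intermediate chain before answering, it is implicitly forced to ground its prediction in the video's observed events rather than jumping directly to a guess.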
Source: arXiv: 2603.14935