📄 Abstract - Thinking with Video: Video Generation as a Promising Multimodal Reasoning Paradigm

"Thinking with Text" and "Thinking with Images" paradigm significantly improve the reasoning ability of large language models (LLMs) and Vision Language Models (VLMs). However, these paradigms have inherent limitations. (1) Images capture only single moments and fail to represent dynamic processes or continuous changes, and (2) The separation of text and vision as distinct modalities, hindering unified multimodal understanding and generation. To overcome these limitations, we introduce "Thinking with Video", a new paradigm that leverages video generation models, such as Sora-2, to bridge visual and textual reasoning in a unified temporal framework. To support this exploration, we developed the Video Thinking Benchmark (VideoThinkBench). VideoThinkBench encompasses two task categories: (1) vision-centric tasks (e.g., Eyeballing Puzzles), and (2) text-centric tasks (e.g., subsets of GSM8K, MMMU). Our evaluation establishes Sora-2 as a capable reasoner. On vision-centric tasks, Sora-2 is generally comparable to state-of-the-art (SOTA) VLMs, and even surpasses VLMs on several tasks, such as Eyeballing Games. On text-centric tasks, Sora-2 achieves 92% accuracy on MATH, and 75.53% accuracy on MMMU. Furthermore, we systematically analyse the source of these abilities. We also find that self-consistency and in-context learning can improve Sora-2's performance. In summary, our findings demonstrate that the video generation model is the potential unified multimodal understanding and generation model, positions "thinking with video" as a unified multimodal reasoning paradigm.

Top tags: video generation, multi-modal, model evaluation
Detailed tags: multimodal reasoning, video benchmark, sora-2, temporal understanding, visual reasoning

📄 Paper Summary

Thinking with Video: Video Generation as a Promising Multimodal Reasoning Paradigm


1️⃣ One-Sentence Summary

This paper proposes a new paradigm called "Thinking with Video", which uses video generation models (such as Sora-2) to unify visual and textual reasoning, overcoming the traditional separation of images and text and demonstrating strong understanding and generation capabilities across a range of tasks.

