arXiv submission date: 2025-12-08
📄 Abstract - Unified Video Editing with Temporal Reasoner

Existing video editing methods face a critical trade-off: expert models offer precision but rely on task-specific priors like masks, hindering unification; conversely, unified temporal in-context learning models are mask-free but lack explicit spatial cues, leading to weak instruction-to-region mapping and imprecise localization. To resolve this conflict, we propose VideoCoF, a novel Chain-of-Frames approach inspired by Chain-of-Thought reasoning. VideoCoF enforces a "see, reason, then edit" procedure by compelling the video diffusion model to first predict reasoning tokens (edit-region latents) before generating the target video tokens. This explicit reasoning step removes the need for user-provided masks while achieving precise instruction-to-region alignment and fine-grained video editing. Furthermore, we introduce a RoPE alignment strategy that leverages these reasoning tokens to ensure motion alignment and enable length extrapolation beyond the training duration. We demonstrate that with a minimal data cost of only 50k video pairs, VideoCoF achieves state-of-the-art performance on VideoCoF-Bench, validating the efficiency and effectiveness of our approach. Our code, weights, and data are available at this https URL.
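To make the "see, reason, then edit" ordering concrete, below is a minimal sketch (not the authors' code) of how the token sequence and temporal RoPE positions might be laid out: reasoning (edit-region) latents are placed before the target video latents, and all segments share per-frame position ids. Every name here (`build_cof_sequence`, `temporal_position_ids`, the toy shapes) is hypothetical; the paper's actual architecture, losses, and RoPE scheme may differ.

```python
# Illustrative sketch of Chain-of-Frames token ordering, assuming a
# diffusion transformer over latent video tokens. Hypothetical names.
import torch

def build_cof_sequence(src_latents, reason_latents, tgt_latents):
    """Order tokens as [source | reasoning | target], so the model must
    predict edit-region latents ("see, reason") before the edited video
    latents ("then edit")."""
    return torch.cat([src_latents, reason_latents, tgt_latents], dim=1)

def temporal_position_ids(num_frames, tokens_per_frame):
    """Assumed RoPE alignment: reasoning and target tokens reuse the
    temporal positions of the source frames they correspond to, so the
    rotary phase is driven by frame index rather than absolute sequence
    offset. This is one plausible reading of the abstract's "RoPE
    alignment strategy", not a confirmed detail."""
    frame_ids = torch.arange(num_frames).repeat_interleave(tokens_per_frame)
    # source, reasoning, and target segments all share per-frame positions
    return torch.cat([frame_ids, frame_ids, frame_ids], dim=0)

# Toy shapes: 4 frames, 16 latent tokens per frame, 64-dim latents.
B, F, T, D = 1, 4, 16, 64
src = torch.randn(B, F * T, D)
reason = torch.randn(B, F * T, D)   # edit-region latents
tgt = torch.randn(B, F * T, D)      # edited video latents

seq = build_cof_sequence(src, reason, tgt)   # (1, 192, 64)
pos = temporal_position_ids(F, T)            # (192,)
print(seq.shape, pos.shape)
```

Under this assumed layout, extending the clip beyond the training duration only extends `frame_ids`, which would be consistent with the length-extrapolation claim; the mask-free property follows from the model predicting the edit-region latents itself instead of receiving a user mask.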

Top-level tags: video generation, model training, multi-modal
Detailed tags: video editing, diffusion models, temporal reasoning, chain-of-frames, instruction-to-region alignment

Unified Video Editing with Temporal Reasoner


1️⃣ One-sentence summary

This paper proposes a new method called VideoCoF, which has the model first "observe and reason about" the regions of a video that need editing before performing the edit, achieving precise, unified video editing without user-provided masks while requiring only a small amount of training data.


Source: arXiv: 2512.07469