arXiv submission date: 2026-04-09
📄 Abstract - ImVideoEdit: Image-learning Video Editing via 2D Spatial Difference Attention Blocks

Current video editing models often rely on expensive paired video data, which limits their practical scalability. In essence, most video editing tasks can be formulated as a decoupled spatiotemporal process, where the temporal dynamics of the pretrained model are preserved while spatial content is selectively and precisely modified. Based on this insight, we propose ImVideoEdit, an efficient framework that learns video editing capabilities entirely from image pairs. By freezing the pre-trained 3D attention modules and treating images as single-frame videos, we decouple the 2D spatial learning process to help preserve the original temporal dynamics. The core of our approach is a Predict-Update Spatial Difference Attention module that progressively extracts and injects spatial differences. Rather than relying on rigid external masks, we incorporate a Text-Guided Dynamic Semantic Gating mechanism for adaptive and implicit text-driven modifications. Despite training on only 13K image pairs for 5 epochs with exceptionally low computational overhead, ImVideoEdit achieves editing fidelity and temporal consistency comparable to larger models trained on extensive video datasets.
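The core idea in the abstract (freeze the pretrained 3D temporal attention, treat images as single-frame videos, and learn a text-gated 2D spatial-difference path) can be sketched in PyTorch. This is a minimal illustration under assumed names and shapes, not the authors' implementation: the frozen "temporal" layer stands in for the pretrained 3D attention, and the gate is a stand-in for the Text-Guided Dynamic Semantic Gating mechanism.

```python
import torch
import torch.nn as nn

class SpatialDifferenceBlock(nn.Module):
    """Hypothetical sketch: a trainable 2D spatial-difference path attached
    to a frozen stand-in for the pretrained 3D temporal module."""

    def __init__(self, dim: int):
        super().__init__()
        # Stand-in for the pretrained 3D attention: frozen during training,
        # so the original temporal dynamics are preserved.
        self.temporal = nn.Linear(dim, dim)
        for p in self.temporal.parameters():
            p.requires_grad = False
        # Trainable 2D path that predicts a spatial difference.
        self.spatial = nn.Linear(dim, dim)
        # Text-conditioned gate scaling how much difference is injected.
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim); an image is simply frames == 1.
        base = self.temporal(x)             # frozen temporal pathway
        diff = self.spatial(x) - x          # predicted spatial difference
        g = torch.sigmoid(self.gate(text))  # text-driven gate in (0, 1)
        # Broadcast the per-sample gate over frames and tokens.
        return base + g.unsqueeze(1).unsqueeze(1) * diff

block = SpatialDifferenceBlock(dim=8)
image_as_video = torch.randn(2, 1, 16, 8)  # image pairs as 1-frame videos
text_emb = torch.randn(2, 8)
out = block(image_as_video, text_emb)
print(out.shape)  # torch.Size([2, 1, 16, 8])
# Only the spatial path and the gate receive gradients.
trainable = [n for n, p in block.named_parameters() if p.requires_grad]
print(sorted(set(n.split(".")[0] for n in trainable)))  # ['gate', 'spatial']
```

Because the gated update is purely spatial and the temporal stand-in is frozen, training on single-frame inputs never perturbs the temporal weights, which is the decoupling the abstract describes.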

Top tags: video generation, model training, computer vision
Detailed tags: video editing, attention mechanism, spatial learning, image-to-video, temporal consistency

ImVideoEdit: Image-learning Video Editing via 2D Spatial Difference Attention Blocks


1️⃣ One-sentence summary

This paper proposes ImVideoEdit, an efficient video editing framework trained only on paired image data. It enables precise, adaptive modification of frame content while preserving the video's original temporal coherence, greatly reducing the reliance on expensive paired video data and the associated computational cost.

Source: arXiv 2604.07958