
arXiv submission date: 2026-01-20
📄 Abstract - OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer

Videos convey richer information than images or text, capturing both spatial and temporal dynamics. However, most existing video customization methods rely on reference images or task-specific temporal priors, failing to fully exploit the rich spatio-temporal information inherent in videos, thereby limiting flexibility and generalization in video generation. To address these limitations, we propose OmniTransfer, a unified framework for spatio-temporal video transfer. It leverages multi-view information across frames to enhance appearance consistency and exploits temporal cues to enable fine-grained temporal control. To unify various video transfer tasks, OmniTransfer incorporates three key designs: Task-aware Positional Bias that adaptively leverages reference video information to improve temporal alignment or appearance consistency; Reference-decoupled Causal Learning separating reference and target branches to enable precise reference transfer while improving efficiency; and Task-adaptive Multimodal Alignment using multimodal semantic guidance to dynamically distinguish and tackle different tasks. Extensive experiments show that OmniTransfer outperforms existing methods in appearance (ID and style) and temporal transfer (camera movement and video effects), while matching pose-guided methods in motion transfer without using pose, establishing a new paradigm for flexible, high-fidelity video generation.
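The abstract's Task-aware Positional Bias can be pictured as a bias added to cross-attention logits between target-frame tokens and reference-video tokens, shaped differently per task: favoring temporally aligned reference frames for temporal transfer, and staying flat for appearance transfer so all reference frames contribute equally. The sketch below is purely illustrative under that reading; the function names, the linear-distance bias, and the `task` switch are assumptions, not the paper's actual design.

```python
import numpy as np

def task_aware_bias(n_target_frames, n_ref_frames, task, strength=2.0):
    """Hypothetical task-aware positional bias over (target, reference) frame pairs.

    For temporal tasks (camera movement, effects), penalize attention to
    temporally misaligned reference frames; for appearance tasks (ID, style),
    return a flat bias so every reference frame is equally available.
    Illustrative only -- not the implementation from the paper.
    """
    t = np.arange(n_target_frames)[:, None] / max(n_target_frames - 1, 1)
    r = np.arange(n_ref_frames)[None, :] / max(n_ref_frames - 1, 1)
    if task == "temporal":
        return -strength * np.abs(t - r)  # larger penalty, further apart in time
    return np.zeros((n_target_frames, n_ref_frames))  # appearance: no preference

def biased_cross_attention(q, k, v, frame_of_q, frame_of_k, task):
    """Scaled dot-product attention with the task-aware bias added to the logits."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    bias = task_aware_bias(frame_of_q.max() + 1, frame_of_k.max() + 1, task)
    # Look up the bias for each (query token frame, key token frame) pair.
    logits = logits + bias[frame_of_q[:, None], frame_of_k[None, :]]
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v
```

Under this reading, the same attention layer serves both task families: only the bias table changes, which matches the abstract's claim that the bias "adaptively leverages reference video information to improve temporal alignment or appearance consistency".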

Top-level tags: video generation, multi-modal, model training
Detailed tags: video transfer, spatio-temporal, unified framework, temporal alignment, multimodal guidance

OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer


1️⃣ One-sentence summary

This paper proposes a unified framework, OmniTransfer, that exploits the spatio-temporal information inherent in videos to flexibly and efficiently handle a range of video transfer tasks covering both appearance (e.g., identity, style) and temporal properties (e.g., camera movement, video effects), achieving high-quality video generation without relying on task-specific priors.

Source: arXiv 2601.14250