📄 Abstract - Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising

Diffusion-based video generation can create realistic videos, yet existing image- and text-based conditioning fails to offer precise motion control. Prior methods for motion-conditioned synthesis typically require model-specific fine-tuning, which is computationally expensive and restrictive. We introduce Time-to-Move (TTM), a training-free, plug-and-play framework for motion- and appearance-controlled video generation with image-to-video (I2V) diffusion models. Our key insight is to use crude reference animations obtained through user-friendly manipulations such as cut-and-drag or depth-based reprojection. Motivated by SDEdit's use of coarse layout cues for image editing, we treat the crude animations as coarse motion cues and adapt the mechanism to the video domain. We preserve appearance with image conditioning and introduce dual-clock denoising, a region-dependent strategy that enforces strong alignment in motion-specified regions while allowing flexibility elsewhere, balancing fidelity to user intent with natural dynamics. This lightweight modification of the sampling process incurs no additional training or runtime cost and is compatible with any backbone. Extensive experiments on object and camera motion benchmarks show that TTM matches or exceeds existing training-based baselines in realism and motion control. Beyond this, TTM introduces a unique capability: precise appearance control through pixel-level conditioning, exceeding the limits of text-only prompting. Visit our project page for video examples and code: this https URL.
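The dual-clock idea in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: the function name `dual_clock_denoise`, the `denoise_step` callback, and the linear noise schedule are all assumptions. The core idea it demonstrates is SDEdit-style re-injection of a noised crude animation, with the motion-specified (masked) region anchored to the reference for a longer portion of the sampling trajectory than the free region.

```python
import numpy as np

def dual_clock_denoise(reference, mask, denoise_step, num_steps=50,
                       t_strong=0.7, t_weak=0.3, rng=None):
    """Toy sketch of dual-clock denoising (hypothetical interface).

    reference : crude animation frames, shape (T, H, W, C)
    mask      : 1 where motion is user-specified, 0 elsewhere (same shape)
    denoise_step(x, t) : one reverse-diffusion step of some backbone
    t_strong / t_weak  : fraction of the trajectory during which the
        noised reference is re-injected in masked / unmasked regions.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(reference.shape)   # start from pure noise
    for i in range(num_steps):
        t = 1.0 - i / num_steps                # noise level, 1 -> 0
        x = denoise_step(x, t)
        # SDEdit-style: corrupt the crude reference to the current level t
        noised_ref = (1 - t) * reference + t * rng.standard_normal(reference.shape)
        # two clocks: the masked region stays anchored to the reference
        # for t_strong of the steps, the rest only for t_weak of them
        anchor = mask * (t > 1 - t_strong) + (1 - mask) * (t > 1 - t_weak)
        x = anchor * noised_ref + (1 - anchor) * x
    return x
```

With a contractive placeholder denoiser, the masked region ends up tracking the reference more closely than the free region, which is the qualitative behavior the paper describes (strong alignment where motion is specified, flexibility elsewhere).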

Top-level tags: video generation, computer vision, model training
Detailed tags: motion control, video diffusion, training-free, dual-clock denoising, image-to-video

📄 Paper Summary

Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising


1️⃣ One-Sentence Summary

This paper proposes a training-free, plug-and-play video generation framework that, using simple user-supplied crude animations and a dual-clock denoising technique, achieves precise control over object motion and appearance in generated videos while preserving natural dynamics.

