📄 Abstract - DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation

This paper presents DualCamCtrl, a novel end-to-end diffusion model for camera-controlled video generation. Recent works have advanced this field by representing camera poses as ray-based conditions, yet they often lack sufficient scene understanding and geometric awareness. DualCamCtrl specifically targets this limitation by introducing a dual-branch framework that mutually generates camera-consistent RGB and depth sequences. To harmonize these two modalities, we further propose the Semantic Guided Mutual Alignment (SIGMA) mechanism, which performs RGB-depth fusion in a semantics-guided and mutually reinforced manner. These designs collectively enable DualCamCtrl to better disentangle appearance and geometry modeling, generating videos that more faithfully adhere to the specified camera trajectories. Additionally, we analyze and reveal the distinct influence of depth and camera poses across denoising stages and further demonstrate that early and late stages play complementary roles in forming global structure and refining local details. Extensive experiments demonstrate that DualCamCtrl achieves more consistent camera-controlled video generation, with over 40% reduction in camera motion errors compared with prior methods. Our project page: this https URL
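The abstract does not specify how SIGMA is implemented, but the idea of "mutually reinforced" RGB-depth fusion under a shared semantic signal can be sketched in a few lines. The following is a minimal, hypothetical illustration only — the gating function, the per-channel formulation, and the function name `fuse` are assumptions, not the paper's actual mechanism:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(rgb, depth):
    """Gated mutual fusion of two per-channel feature vectors.

    Hypothetical sketch: each modality is updated with cross-modal
    context, scaled by a shared "semantic" gate derived from both
    inputs, so RGB and depth reinforce each other symmetrically.
    """
    fused_rgb, fused_depth = [], []
    for r, d in zip(rgb, depth):
        gate = sigmoid(r + d)           # shared gate per channel (assumed form)
        fused_rgb.append(r + gate * d)  # RGB branch receives gated depth context
        fused_depth.append(d + gate * r)  # depth branch receives gated RGB context
    return fused_rgb, fused_depth

rgb = [0.5, -1.0, 2.0]
depth = [1.0, 0.0, -0.5]
r_out, d_out = fuse(rgb, depth)
print(r_out, d_out)
```

The residual form (`r + gate * d`) keeps each branch's own features intact while injecting the other modality, which matches the paper's stated goal of disentangling appearance and geometry rather than merging them outright.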

Top-level tags: video generation, computer vision, multi-modal
Detailed tags: camera control, diffusion model, RGB-depth fusion, video synthesis, geometry-aware generation

DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation


1️⃣ One-Sentence Summary

This paper proposes a model called DualCamCtrl that uses a dual-branch framework to jointly generate color (RGB) and depth video, combined with a semantics-guided fusion mechanism, to substantially improve the accuracy and geometric consistency of videos generated along a specified camera trajectory, reducing camera motion errors by over 40% compared with prior methods.


📄 Open the original PDF