arXiv submission date: 2025-12-29
📄 Abstract - Bridging Your Imagination with Audio-Video Generation via a Unified Director

Existing AI-driven video creation systems typically treat script drafting and key-shot design as two disjoint tasks: the former relies on large language models, while the latter depends on image generation models. We argue that these two tasks should be unified within a single framework, as logical reasoning and imaginative thinking are both fundamental qualities of a film director. In this work, we propose UniMAGE, a unified director model that bridges user prompts with well-structured scripts, thereby empowering non-experts to produce long-context, multi-shot films by leveraging existing audio-video generation models. To achieve this, we employ the Mixture-of-Transformers architecture that unifies text and image generation. To further enhance narrative logic and keyframe consistency, we introduce a "first interleaving, then disentangling" training paradigm. Specifically, we first perform Interleaved Concept Learning, which utilizes interleaved text-image data to foster the model's deeper understanding and imaginative interpretation of scripts. We then conduct Disentangled Expert Learning, which decouples script writing from keyframe generation, enabling greater flexibility and creativity in storytelling. Extensive experiments demonstrate that UniMAGE achieves state-of-the-art performance among open-source models, generating logically coherent video scripts and visually consistent keyframe images.
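The Mixture-of-Transformers idea mentioned above can be pictured as routing each token in an interleaved script to a modality-specific expert while the sequence order is preserved for shared attention. The sketch below is purely illustrative: the function and expert names are hypothetical stand-ins, not UniMAGE's actual API or architecture.

```python
# Hypothetical sketch of Mixture-of-Transformers-style modality routing.
# Each (modality, token) pair goes to its own expert (here, toy string
# functions standing in for text/image transformer sub-networks), while
# sequence order is kept so a shared attention step could span both.

def text_expert(tok):
    # stand-in for the text-expert layers (script writing)
    return f"T({tok})"

def image_expert(tok):
    # stand-in for the image-expert layers (keyframe generation)
    return f"I({tok})"

def mot_layer(sequence):
    """Route each token to its modality expert, preserving order."""
    experts = {"text": text_expert, "image": image_expert}
    return [experts[modality](tok) for modality, tok in sequence]

# An interleaved script: shot descriptions (text) alternating with keyframes.
seq = [("text", "shot1"), ("image", "kf1"), ("text", "shot2")]
print(mot_layer(seq))  # ['T(shot1)', 'I(kf1)', 'T(shot2)']
```

This mirrors the paper's "first interleaving, then disentangling" framing at a toy level: during interleaved learning both experts see mixed sequences, while disentangled learning would train each expert's task separately.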

Top-level tags: multi-modal aigc video generation
Detailed tags: unified generation script-to-video mixture-of-transformers keyframe consistency interleaved learning

Bridging Your Imagination with Audio-Video Generation via a Unified Director


1️⃣ One-Sentence Summary

This paper proposes UniMAGE, a unified director model that automatically turns a user's ideas into logically coherent scripts and visually consistent keyframes, enabling non-experts to easily produce multi-shot, long-form films.

Source: arXiv:2512.23222