MOVA: Towards Scalable and Synchronized Video-Audio Generation
1️⃣ One-Sentence Summary
This paper introduces an open-source model called MOVA that generates, in a single pass, high-quality audiovisual content whose lip movements, sound effects, and music are all synchronized with the visuals, addressing the tendency of existing generative models to either neglect audio or rely on inefficient cascaded pipelines.
Audio is indispensable for real-world video, yet generative models have largely overlooked it. Current approaches to producing audio-visual content often rely on cascaded pipelines, which increase cost, accumulate errors, and degrade overall quality. While systems such as Veo 3 and Sora 2 demonstrate the value of simultaneous generation, joint multimodal modeling introduces unique challenges in architecture, data, and training. Moreover, the closed-source nature of existing systems limits progress in the field. In this work, we introduce MOVA (MOSS Video and Audio), an open-source model capable of generating high-quality, synchronized audio-visual content, including realistic lip-synced speech, environment-aware sound effects, and content-aligned music. MOVA employs a Mixture-of-Experts (MoE) architecture with 32B total parameters, of which 18B are active during inference, and supports the IT2VA (Image-Text to Video-Audio) generation task. By releasing the model weights and code, we aim to advance research and foster a vibrant community of creators. The released codebase provides comprehensive support for efficient inference, LoRA fine-tuning, and prompt enhancement.
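The abstract does not detail MOVA's routing scheme, but the total-vs-active parameter gap (32B total, 18B active) is the defining property of sparse MoE models. Below is a minimal, generic top-k MoE feed-forward layer in PyTorch that illustrates the mechanism; all names and dimensions (`MoELayer`, `num_experts=8`, `top_k=2`, `d_model=64`) are illustrative assumptions, not MOVA's actual configuration.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Minimal top-k Mixture-of-Experts feed-forward layer.

    Illustrative sketch only; MOVA's real routing, expert count,
    and dimensions are not specified in the abstract.
    """
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token is sent to its top-k experts only,
        # so most expert parameters stay inactive for any given token.
        scores = self.router(x)                              # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)   # (tokens, top_k)
        weights = weights.softmax(dim=-1)                    # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(16, 64)  # 16 tokens, model width 64
layer = MoELayer(d_model=64, d_ff=256, num_experts=8, top_k=2)
print(layer(x).shape)    # torch.Size([16, 64])
```

With 8 experts and top-2 routing, each token touches only a quarter of the expert parameters per layer; scaled up, this is how a model can hold 32B parameters in total while activating roughly 18B per inference step.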
Source: arXiv: 2602.08794