📄 Abstract - UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions

Due to the lack of effective cross-modal modeling, existing open-source audio-video generation methods often exhibit compromised lip synchronization and insufficient semantic consistency. To mitigate these drawbacks, we propose UniAVGen, a unified framework for joint audio and video generation. UniAVGen is anchored in a dual-branch joint synthesis architecture, incorporating two parallel Diffusion Transformers (DiTs) to build a cohesive cross-modal latent space. At its heart lies an Asymmetric Cross-Modal Interaction mechanism, which enables bidirectional, temporally aligned cross-attention, thus ensuring precise spatiotemporal synchronization and semantic consistency. Furthermore, this cross-modal interaction is augmented by a Face-Aware Modulation module, which dynamically prioritizes salient regions in the interaction process. To enhance generative fidelity during inference, we additionally introduce Modality-Aware Classifier-Free Guidance, a novel strategy that explicitly amplifies cross-modal correlation signals. Notably, UniAVGen's robust joint synthesis design enables seamless unification of pivotal audio-video tasks within a single model, such as joint audio-video generation and continuation, video-to-audio dubbing, and audio-driven video synthesis. Comprehensive experiments validate that, with far fewer training samples (1.3M vs. 30.1M), UniAVGen delivers overall advantages in audio-video synchronization, timbre consistency, and emotion consistency.
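The abstract does not spell out the formula behind Modality-Aware Classifier-Free Guidance, but one plausible reading of "explicitly amplifies cross-modal correlation signals" is a guidance rule with a separate weight on the jointly-conditioned term. The sketch below is a minimal numpy illustration of that idea, assuming three denoiser passes (unconditional, single-modality conditioned, and jointly conditioned); the function name, decomposition, and default weights are all illustrative, not the paper's actual method.

```python
import numpy as np

def modality_aware_cfg(eps_uncond, eps_single, eps_joint,
                       w_single=5.0, w_cross=2.0):
    """Hypothetical modality-aware CFG combination.

    eps_uncond : denoiser output with all conditioning dropped
    eps_single : output conditioned on one modality only (e.g. video without audio)
    eps_joint  : output conditioned on both modalities jointly
    w_single   : standard CFG weight on the single-modality signal
    w_cross    : extra weight amplifying the cross-modal residual
    """
    return (eps_uncond
            + w_single * (eps_single - eps_uncond)   # usual CFG direction
            + w_cross * (eps_joint - eps_single))    # cross-modal correlation term

# With w_cross = 0 this collapses to vanilla classifier-free guidance.
u = np.zeros(4)
s = np.ones(4)
j = np.full(4, 2.0)
print(modality_aware_cfg(u, s, j, w_single=1.0, w_cross=0.0))  # → [1. 1. 1. 1.]
```

Setting `w_cross > 0` pushes the sample further along the direction that only joint conditioning provides, which matches the stated goal of strengthening audio-video correlation at inference time.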

Top-level tags: multi-modal video generation aigc
Detailed tags: audio-video generation diffusion transformers cross-modal interaction lip synchronization classifier-free guidance

📄 Paper Summary

UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions


1️⃣ One-Sentence Summary

This paper proposes UniAVGen, a unified audio-video generation framework whose novel cross-modal interaction mechanism addresses the lip-synchronization and semantic-consistency shortcomings of existing methods, unifies multiple audio-video generation tasks within a single model, and substantially reduces the amount of training data required.

