🤖 System
📄 Abstract - Does Hearing Help Seeing? Investigating Audio-Video Joint Denoising for Video Generation

Recent audio-video generative systems suggest that coupling modalities benefits not only audio-video synchrony but also the video modality itself. We pose a fundamental question: Does audio-video joint denoising training improve video generation, even when we only care about video quality? To study this, we introduce a parameter-efficient Audio-Video Full DiT (AVFullDiT) architecture that leverages pre-trained text-to-video (T2V) and text-to-audio (T2A) modules for joint denoising. We train (i) a T2AV model with AVFullDiT and (ii) a T2V-only counterpart under identical settings. Our results provide the first systematic evidence that audio-video joint denoising can deliver more than synchrony. We observe consistent improvements on challenging subsets featuring large motions and object contacts. We hypothesize that predicting audio acts as a privileged signal, encouraging the model to internalize causal relationships between visual events and their acoustic consequences (e.g., collision $\rightarrow$ impact sound), which in turn regularizes video dynamics. Our findings suggest that cross-modal co-training is a promising approach to developing stronger, more physically grounded world models. Code and dataset will be made publicly available.
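To make the idea of joint denoising concrete, below is a minimal training-step sketch under stated assumptions: a diffusers-style noise scheduler, a joint transformer `model` that takes both noisy latents plus a text embedding and returns one noise prediction per modality, and a simple unweighted sum of the two losses. None of these names or interfaces come from the paper.

```python
# Minimal sketch of one audio-video joint denoising training step.
# The `model` interface, the shared timestep, and the equal loss weighting
# are illustrative assumptions, not the paper's published implementation.
import torch
import torch.nn.functional as F

def joint_denoising_step(model, video_latent, audio_latent, text_emb, scheduler):
    """Noise both modalities at a shared timestep and denoise them jointly."""
    b = video_latent.shape[0]
    t = torch.randint(
        0, scheduler.config.num_train_timesteps, (b,), device=video_latent.device
    )

    noise_v = torch.randn_like(video_latent)
    noise_a = torch.randn_like(audio_latent)
    noisy_v = scheduler.add_noise(video_latent, noise_v, t)
    noisy_a = scheduler.add_noise(audio_latent, noise_a, t)

    # The joint transformer attends across both modalities, so the video
    # branch can exploit acoustic cues while denoising (and vice versa).
    pred_v, pred_a = model(noisy_v, noisy_a, t, text_emb)

    loss_video = F.mse_loss(pred_v, noise_v)
    loss_audio = F.mse_loss(pred_a, noise_a)  # the auxiliary "privileged" signal
    return loss_video + loss_audio
```

Dropping `loss_audio` from this sketch recovers a T2V-only baseline, which mirrors the comparison the paper runs under identical settings.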

Top-level tags: multi-modal video generation, model training
Detailed tags: audio-video joint denoising, diffusion transformers, privileged signal, cross-modal co-training, video quality

Does Hearing Help Seeing? Investigating Audio-Video Joint Denoising for Video Generation


1️⃣ One-sentence summary

Through controlled experiments, this paper shows that adding an audio denoising task to video generation training, even when video quality is the only end goal, helps the model learn the causal relationships between visual events and their sounds, and thereby produce videos with more realistic dynamics that better obey physical laws.

