Klear: Unified Multi-Task Audio-Video Joint Generation
1️⃣ One-Sentence Summary
This paper presents a unified model named Klear that, through novel architecture design, training strategy, and high-quality dataset construction, tackles common problems in audio-video generation such as asynchrony and lip-speech mismatch, enabling high-quality generation of synchronized, instruction-following audio-video content.
Audio-video joint generation has progressed rapidly, yet substantial challenges remain. Non-commercial approaches still suffer from audio-visual asynchrony, poor lip-speech alignment, and unimodal degradation, which stem from weak audio-visual correspondence modeling, limited generalization, and scarce high-quality dense-caption data. To address these issues, we introduce Klear and delve into three axes: model architecture, training strategy, and data curation. Architecturally, we adopt a single-tower design with unified DiT blocks and an Omni-Full Attention mechanism, achieving tight audio-visual alignment and strong scalability. Training-wise, we adopt a progressive multitask regime, moving from random modality masking to joint optimization across tasks under a multistage curriculum, which yields robust representations, strengthens A-V-aligned world knowledge, and prevents unimodal collapse. For datasets, we present the first large-scale audio-video dataset with dense captions, and introduce a novel automated data-construction pipeline that annotates and filters millions of diverse, high-quality, strictly aligned audio-video-caption triplets. Building on this, Klear scales to large datasets, delivering high-fidelity, semantically and temporally aligned, instruction-following generation in both joint and unimodal settings while generalizing robustly to out-of-distribution scenarios. Across tasks, it substantially outperforms prior methods and achieves performance comparable to Veo 3, offering a unified, scalable path toward next-generation audio-video synthesis.
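To make the two key ideas above concrete, here is a minimal, illustrative sketch in numpy: a single-head full self-attention over the concatenated audio and video token sequences (a toy stand-in for the paper's Omni-Full Attention in a single-tower DiT, with identity projections for simplicity), plus a random modality-masking step as might be used in the progressive multitask regime. All function names, shapes, and drop probabilities here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def omni_full_attention(video_tokens, audio_tokens):
    """Toy joint attention: every video token attends to every audio
    token and vice versa, by running full self-attention over the
    concatenated sequence (single-tower, no per-modality branches).
    Uses identity Q/K/V projections to keep the sketch short."""
    x = np.concatenate([video_tokens, audio_tokens], axis=0)  # (Tv+Ta, d)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # (Tv+Ta, Tv+Ta)
    out = softmax(scores, axis=-1) @ x     # (Tv+Ta, d)
    tv = video_tokens.shape[0]
    return out[:tv], out[tv:]              # split back per modality

def random_modality_mask(video_tokens, audio_tokens, rng, p_drop=0.15):
    """Hypothetical training-time masking: with probability p_drop,
    zero out one modality so the model also learns unimodal tasks."""
    if rng.random() < p_drop:
        video_tokens = np.zeros_like(video_tokens)
    elif rng.random() < p_drop:
        audio_tokens = np.zeros_like(audio_tokens)
    return video_tokens, audio_tokens
```

Because the attention is computed over the joint sequence, audio-visual correspondence is modeled directly in every block rather than through a late fusion step, which is the intuition behind the single-tower design's tight alignment.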
Source: arXiv: 2601.04151