TUNA: Taming Unified Visual Representations for Native Unified Multimodal Models
1️⃣ One-Sentence Summary
This paper proposes TUNA, a native unified multimodal model that builds a unified continuous visual representation space, allowing a single model to understand images and videos as well as generate and edit them, while achieving state-of-the-art performance across all of these tasks.
Unified multimodal models (UMMs) aim to jointly perform multimodal understanding and generation within a single framework. We present TUNA, a native UMM that builds a unified continuous visual representation by cascading a VAE encoder with a representation encoder. This unified representation space allows end-to-end processing of images and videos for both understanding and generation tasks. Compared to prior UMMs with decoupled representations, TUNA's unified visual space avoids representation format mismatches introduced by separate encoders, outperforming decoupled alternatives in both understanding and generation. Moreover, we observe that stronger pretrained representation encoders consistently yield better performance across all multimodal tasks, highlighting the importance of the representation encoder. Finally, in this unified setting, jointly training on both understanding and generation data allows the two tasks to benefit from each other rather than interfere. Our extensive experiments on multimodal understanding and generation benchmarks show that TUNA achieves state-of-the-art results in image and video understanding, image and video generation, and image editing, demonstrating the effectiveness and scalability of its unified representation design.
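To make the cascaded design concrete, here is a minimal sketch of how a VAE encoder and a representation encoder might be chained to produce the unified continuous visual representation the abstract describes. This is an illustrative PyTorch-style sketch, not the paper's actual implementation; the class and attribute names (`UnifiedVisualEncoder`, `vae_encoder`, `repr_encoder`) are hypothetical, and the two submodules are assumed to be provided (e.g., pretrained) elsewhere.

```python
import torch
import torch.nn as nn


class UnifiedVisualEncoder(nn.Module):
    """Hypothetical sketch of TUNA-style cascaded visual encoding.

    A VAE encoder first compresses pixels into a continuous latent grid;
    a representation encoder then maps those latents into the unified
    visual space shared by understanding and generation tasks.
    """

    def __init__(self, vae_encoder: nn.Module, repr_encoder: nn.Module):
        super().__init__()
        self.vae_encoder = vae_encoder    # assumed pretrained VAE encoder
        self.repr_encoder = repr_encoder  # assumed pretrained representation encoder

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # pixels: (batch, channels, height, width) image or video frames
        latents = self.vae_encoder(pixels)    # continuous VAE latents
        unified = self.repr_encoder(latents)  # unified continuous representation
        return unified  # consumed downstream for understanding and generation
```

Because both understanding and generation read from the same `unified` output, there is no representation format mismatch between the two task families, which is the property the abstract credits for TUNA outperforming decoupled alternatives.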