Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation
1️⃣ One-Sentence Summary
This paper introduces Tuna-2, a model that processes images directly through pixel embeddings rather than a pretrained vision encoder, simplifying the multimodal architecture while reaching state-of-the-art performance on both understanding and generation tasks, and showing that end-to-end pixel-space learning outperforms the conventional encoder-based approach.
Unified multimodal models typically rely on pretrained vision encoders and use separate visual representations for understanding and generation, creating misalignment between the two tasks and preventing fully end-to-end optimization from raw pixels. We introduce Tuna-2, a native unified multimodal model that performs visual understanding and generation directly from pixel embeddings. Tuna-2 drastically simplifies the model architecture by employing simple patch embedding layers to encode visual input, entirely discarding modular vision encoder designs such as VAEs or representation encoders. Experiments show that Tuna-2 achieves state-of-the-art performance on multimodal benchmarks, demonstrating that unified pixel-space modelling can fully compete with latent-space approaches for high-quality image generation. Moreover, while the encoder-based variant converges faster in early pretraining, Tuna-2's encoder-free design achieves stronger multimodal understanding at scale, particularly on tasks requiring fine-grained visual perception. These results show that pretrained vision encoders are not necessary for multimodal modelling, and that end-to-end pixel-space learning offers a scalable path toward stronger visual representations for both generation and perception.
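For context, a "patch embedding layer" in this setting is just a learned linear projection applied to non-overlapping pixel patches, producing the token sequence the transformer consumes, with no pretrained encoder in the loop. The PyTorch sketch below is a generic illustration of that idea, not the Tuna-2 implementation; the class name `PatchEmbed`, the patch size of 16, and the embedding dimension of 1024 are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Generic pixel-patch embedding (illustrative, not the Tuna-2 code):
    split the image into non-overlapping P x P patches and linearly
    project each patch into the model's embedding dimension."""

    def __init__(self, patch_size: int = 16, in_channels: int = 3, embed_dim: int = 1024):
        super().__init__()
        # A strided convolution with kernel = stride = patch_size is
        # equivalent to flattening each patch and applying one shared
        # linear projection.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # pixels: (B, C, H, W) -> patch grid: (B, D, H/P, W/P)
        x = self.proj(pixels)
        # Flatten the grid into a token sequence: (B, N, D) with N = H*W / P^2
        return x.flatten(2).transpose(1, 2)

# Example: a 256x256 RGB image becomes 256 tokens of width 1024.
tokens = PatchEmbed()(torch.randn(1, 3, 256, 256))
print(tokens.shape)  # torch.Size([1, 256, 1024])
```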
Source: arXiv: 2604.24763