EMMA: Efficient Multimodal Understanding, Generation, and Editing with a Unified Architecture
1️⃣ One-Sentence Summary
This paper proposes EMMA, an efficient unified model architecture that, through innovative compression, concatenation, and network designs, performs understanding, generation, and editing of images and text in a single system, while being smaller, faster, and better-performing than existing large unified models.
We propose EMMA, an efficient and unified architecture for multimodal understanding, generation, and editing. Specifically, EMMA consists of four key components:

1) An efficient autoencoder with a 32x compression ratio, which significantly reduces the number of tokens required for generation; applying the same compression ratio to images also keeps training balanced between understanding and generation tasks (see the token-count sketch after this list).
2) Channel-wise concatenation, instead of token-wise concatenation, of the visual understanding and generation tokens, which further reduces the number of visual tokens in the unified architecture (see the concatenation sketch after this list).
3) A shared-and-decoupled network that enables mutual improvement across tasks while meeting task-specific modeling requirements.
4) A mixture-of-experts (MoE) mechanism in the visual understanding encoder, which substantially improves perceptual capability with only a small increase in parameters.

Extensive experiments show that EMMA-4B significantly outperforms state-of-the-art unified multimodal approaches (e.g., BAGEL-7B) in both efficiency and performance, while also achieving competitive results against recent specialized multimodal understanding and generation models (e.g., Qwen3-VL and Qwen-Image). We believe that EMMA lays a solid foundation for the future development of unified multimodal architectures.
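To make the token savings of point 1 concrete, here is a back-of-the-envelope sketch. It assumes square images and one token per latent spatial position; `num_tokens` is a hypothetical helper for illustration, not part of EMMA's code.

```python
# Back-of-the-envelope token count for a square image under a spatial
# compression ratio (assumption: one token per latent spatial position).
def num_tokens(image_size: int, ratio: int) -> int:
    side = image_size // ratio
    return side * side

# A common 16x autoencoder vs. EMMA's 32x autoencoder on a 1024px image:
print(num_tokens(1024, 16))  # 4096 tokens
print(num_tokens(1024, 32))  # 1024 tokens, a 4x reduction
```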
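Point 2 can likewise be illustrated with a short PyTorch sketch contrasting the two concatenation schemes. The tensor shapes and variable names here are illustrative assumptions, not the paper's actual dimensions.

```python
# Minimal sketch (assumed shapes, not EMMA's actual dimensions) of
# token-wise vs. channel-wise concatenation of the two visual streams.
import torch

B, N, C = 2, 1024, 256             # batch, tokens per stream, channels
und_tokens = torch.randn(B, N, C)  # visual understanding tokens
gen_tokens = torch.randn(B, N, C)  # visual generation tokens

# Token-wise: sequence length doubles, so quadratic attention cost grows ~4x.
token_wise = torch.cat([und_tokens, gen_tokens], dim=1)     # (B, 2N, C)

# Channel-wise: sequence length unchanged; streams fused along features.
channel_wise = torch.cat([und_tokens, gen_tokens], dim=-1)  # (B, N, 2C)

print(token_wise.shape)    # torch.Size([2, 2048, 256])
print(channel_wise.shape)  # torch.Size([2, 1024, 512])
```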
Source: arXiv:2512.04810