Architecture Decoupling Is Not All You Need For Unified Multimodal Model
1️⃣ One-sentence summary
This paper proposes a new method called Attention Interaction Alignment (AIA), which directly learns task-specific multimodal interaction patterns. Without splitting the model architecture, it effectively mitigates the inherent conflict between understanding and generation tasks in unified multimodal models, improving both generation and understanding performance at the same time.
Unified multimodal models for image generation and understanding represent a significant step toward AGI and have attracted widespread attention from researchers. The main challenge of this task lies in the difficulty of establishing an optimal training paradigm, owing to the inherently conflicting objectives of understanding and generation. To alleviate these conflicts and pursue higher performance, many researchers adopt varying degrees of model decoupling (e.g., dual image encoders, MoE/MoT architectures, or a frozen MLLM). However, excessive model decoupling can lead to the loss of interleaved generation ability, undermining the original intent of unified models. In this work, we aim to explore how to mitigate task conflicts without resorting to model decoupling. First, we analyze why decoupling alleviates conflicts by studying the cross-modal attention behavior of models. We observe that model decoupling essentially drives models toward task-specific multimodal interaction patterns, as seen in Qwen-VL and HunyuanImage, and that the more thorough the decoupling, the more consistent this behavior becomes. Motivated by this observation, we propose the Attention Interaction Alignment (AIA) loss, which explicitly learns task-specific multimodal interaction patterns during training. To demonstrate the generalizability of our AIA loss, we apply it to Emu3 and Janus-Pro during the SFT and post-training stages, respectively. Without bells and whistles, AIA not only refines cross-modal attention patterns but also boosts both generation and understanding performance.
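The abstract does not spell out how the AIA loss is formulated, but one plausible reading is an auxiliary term that pulls the unified model's cross-modal attention maps toward a task-specific reference interaction pattern. The sketch below illustrates that idea in PyTorch; the function name `aia_loss_sketch`, the tensor shapes, the source of the reference pattern, and the KL-divergence choice are all assumptions for illustration, not the paper's actual definition.

```python
import torch
import torch.nn.functional as F


def aia_loss_sketch(attn_weights: torch.Tensor,
                    target_pattern: torch.Tensor,
                    eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical attention-interaction-alignment auxiliary loss.

    attn_weights:   (batch, heads, query_len, key_len) cross-modal attention
                    weights taken from a chosen transformer layer of the
                    unified model.
    target_pattern: same shape; a task-specific reference interaction pattern
                    (e.g., one for understanding-style and one for
                    generation-style attention).

    Returns a KL divergence that nudges the model's cross-modal attention
    toward the task-specific pattern.
    """
    # Renormalize over the key dimension so both tensors are valid distributions.
    p = target_pattern / (target_pattern.sum(dim=-1, keepdim=True) + eps)
    q = attn_weights / (attn_weights.sum(dim=-1, keepdim=True) + eps)
    # F.kl_div expects the first argument in log space.
    return F.kl_div(q.clamp_min(eps).log(), p, reduction="batchmean")


# Hypothetical usage: add the auxiliary term to the task loss with a small weight.
# total_loss = task_loss + 0.1 * aia_loss_sketch(attn, ref_pattern)
```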