📄 Paper Summary
MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation
1️⃣ One-Sentence Summary
This work proposes a parallel multimodal diffusion framework in which text and images interact bidirectionally throughout generation, resolving the text-image inconsistency that error propagation causes in conventional sequential models and significantly improving the quality of thinking-aware image synthesis.
2️⃣ Abstract
While thinking-aware generation aims to improve performance on complex tasks, we identify a critical failure mode where existing sequential, autoregressive approaches can paradoxically degrade performance due to error propagation. To systematically analyze this issue, we propose ParaBench, a new benchmark designed to evaluate both text and image output modalities. Our analysis using ParaBench reveals that this performance degradation is strongly correlated with poor alignment between the generated reasoning and the final image. To resolve this, we propose a parallel multimodal diffusion framework, MMaDA-Parallel, that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory. MMaDA-Parallel is trained with supervised finetuning and then further optimized by Parallel Reinforcement Learning (ParaRL), a novel strategy that applies semantic rewards along the trajectory to enforce cross-modal consistency. Experiments validate that our model significantly improves cross-modal alignment and semantic consistency, achieving a 6.9% improvement in Output Alignment on ParaBench compared to the state-of-the-art model, Bagel, establishing a more robust paradigm for thinking-aware image synthesis. Our code is open-sourced at this https URL
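To make the trajectory-level reward idea concrete, below is a minimal, hypothetical sketch of a ParaRL-style objective: text and image states are denoised jointly (so each step is bidirectional), a semantic consistency reward is evaluated at every intermediate step, and each step's log-probability is weighted by that reward in a REINFORCE-style surrogate. All names here (`ToyParallelPolicy`, `cosine_consistency`, `para_rl_loss`) and the exact update rule are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of ParaRL-style trajectory rewards; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyParallelPolicy(nn.Module):
    """Toy joint denoiser: refines text and image latents together each step."""

    def __init__(self, dim=32):
        super().__init__()
        self.mix = nn.Linear(2 * dim, 2 * dim)

    def denoise_step(self, text, image):
        # Concatenating both modalities lets each one condition on the other,
        # mimicking the continuous bidirectional interaction described above.
        h = self.mix(torch.cat([text, image], dim=-1))
        new_text, new_image = h.chunk(2, dim=-1)
        # Treat the step as a stochastic policy and keep its log-probability
        # for the policy-gradient surrogate below.
        dist_t = torch.distributions.Normal(new_text, 1.0)
        dist_i = torch.distributions.Normal(new_image, 1.0)
        sample_t, sample_i = dist_t.rsample(), dist_i.rsample()
        log_prob = dist_t.log_prob(sample_t).mean() + dist_i.log_prob(sample_i).mean()
        return sample_t, sample_i, log_prob


def cosine_consistency(text, image):
    """Stand-in semantic reward: cosine similarity between the two latents."""
    return F.cosine_similarity(text, image, dim=-1).mean()


def para_rl_loss(policy, text, image, num_steps=8):
    """Accumulate rewards along the whole denoising trajectory, not only at the end."""
    loss = torch.zeros(())
    for _ in range(num_steps):
        text, image, log_prob = policy.denoise_step(text, image)
        reward = cosine_consistency(text, image)
        loss = loss - reward.detach() * log_prob  # REINFORCE: push up rewarded steps
    return loss / num_steps


if __name__ == "__main__":
    policy = ToyParallelPolicy()
    text0, image0 = torch.randn(4, 32), torch.randn(4, 32)
    loss = para_rl_loss(policy, text0, image0)
    loss.backward()  # gradients reach the joint text-image denoiser
    print(float(loss))
```

The key design point this sketch tries to convey is that the reward is applied at every intermediate step rather than only on the final sample, which is how the paper describes enforcing cross-modal consistency along the trajectory.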