📄 Abstract - REASONEDIT: Towards Reasoning-Enhanced Image Editing Models

Recent advances in image editing models have shown remarkable progress. A common architectural design couples a multimodal large language model (MLLM) encoder with a diffusion decoder, as seen in systems such as Step1X-Edit and Qwen-Image-Edit, where the MLLM encodes both the reference image and the instruction but remains frozen during training. In this work, we demonstrate that unlocking the reasoning capabilities of the MLLM can further push the boundaries of editing models. Specifically, we explore two reasoning mechanisms, thinking and reflection, which enhance instruction understanding and editing accuracy. Building on these, our proposed framework enables image editing in a thinking-editing-reflection loop: the thinking mechanism leverages the world knowledge of the MLLM to interpret abstract instructions, while the reflection mechanism reviews editing results, automatically corrects unintended manipulations, and identifies the stopping round. Extensive experiments demonstrate that our reasoning approach achieves significant performance gains, with improvements on ImgEdit (+4.3%), GEdit (+4.7%), and Kris (+8.2%) when initializing our DiT from Step1X-Edit (ReasonEdit-S), and also outperforms previous open-source methods on both GEdit and Kris when integrated with Qwen-Image-Edit (ReasonEdit-Q).
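The thinking-editing-reflection loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the component functions (`mllm_think`, `diffusion_edit`, `mllm_reflect`) and the `Verdict` type are hypothetical placeholders standing in for the MLLM and diffusion decoder.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """Hypothetical reflection output: accept the edit, or supply a correction."""
    accept: bool
    correction: str = ""

# Toy stand-ins for the MLLM and diffusion decoder; names and behavior
# are illustrative assumptions, not the paper's actual API.
def mllm_think(image, instruction):
    # Thinking: interpret the (possibly abstract) instruction into a concrete plan.
    return f"plan({instruction})"

def diffusion_edit(image, plan):
    # Editing: the diffusion decoder applies the plan to the image.
    return f"{image}+{plan}"

def mllm_reflect(image, edited, instruction):
    # Reflection (toy logic): accept once a corrective instruction has been applied.
    accepted = instruction.startswith("fix:")
    return Verdict(accept=accepted, correction=f"fix: {instruction}")

def reason_edit(image, instruction, max_rounds=3):
    """Run the thinking-editing-reflection loop until reflection accepts
    the result (the stopping round) or the round budget is exhausted."""
    edited = image
    for _ in range(max_rounds):
        plan = mllm_think(image, instruction)                # thinking
        edited = diffusion_edit(image, plan)                 # editing
        verdict = mllm_reflect(image, edited, instruction)   # reflection
        if verdict.accept:
            break                                            # stopping round found
        instruction = verdict.correction                     # auto-correct and retry
        image = edited
    return edited
```

The key design point the abstract highlights is that reflection both catches unintended manipulations (by feeding a corrective instruction back into the next round) and decides when to stop, rather than running a fixed number of editing passes.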

Top-level tags: multi-modal model training, computer vision
Detailed tags: image editing, reasoning, multimodal llm, diffusion models, instruction following

REASONEDIT: Towards Reasoning-Enhanced Image Editing Models


1️⃣ One-Sentence Summary

This paper proposes a new framework named ReasonEdit that unlocks the reasoning capabilities of multimodal large language models so that, when editing an image, the AI first thinks through the instruction like a human would, then reviews the result and automatically corrects errors, significantly improving the accuracy and quality of image editing.

