Prompt-Guided Image Editing with Masked Logit Nudging in Visual Autoregressive Models
1️⃣ One-sentence summary
This paper proposes a new method called "Masked Logit Nudging" that lets an AI model precisely edit the parts of an image specified by a text instruction while faithfully preserving the regions that should not change, and it does so much faster than current mainstream approaches.
We address the problem of prompt-guided image editing in visual autoregressive models. Given a source image and a target text prompt, we aim to modify the source image according to the target prompt while preserving all regions that are unrelated to the requested edit. To this end, we present Masked Logit Nudging, which uses the source image token maps to introduce a guidance step that aligns the model's predictions under the target prompt with these source token maps. Specifically, we convert the fixed source encodings into logits using the VAR encoding, nudging the model's predicted logits towards the targets along a semantic trajectory defined by the source-target prompts. Edits are applied only within spatial masks obtained through a dedicated masking scheme that leverages cross-attention differences between the source and edited prompts. We then introduce a refinement step to correct quantization errors and improve reconstruction quality. Our approach achieves the best image-editing performance on the PIE benchmark at 512px and 1024px resolutions. Beyond editing, our method delivers faithful reconstructions and outperforms previous methods on COCO at 512px and OpenImages at 1024px. Overall, our method outperforms VAR-related approaches and achieves comparable or even better performance than diffusion models, while being much faster. Code is available at this https URL.
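The core guidance step described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `masked_logit_nudge`, the blending coefficient `alpha`, and the array shapes are hypothetical and not taken from the paper, which operates on VAR token maps rather than plain arrays. The sketch only conveys the idea of nudging predicted logits toward source-image logits outside the edit mask so that unrelated regions are preserved.

```python
import numpy as np

def masked_logit_nudge(pred_logits, source_logits, mask, alpha):
    """Hypothetical sketch of masked logit nudging.

    pred_logits:   (H, W, V) logits predicted under the target prompt
    source_logits: (H, W, V) logits derived from the source token map
    mask:          (H, W) spatial edit mask, 1 = editable, 0 = preserve
    alpha:         nudging strength in [0, 1] (an assumed parameter)
    """
    # Pull predictions toward the source logits, but only OUTSIDE the
    # edit mask, so preserved regions follow the source image while the
    # masked region is free to follow the target prompt.
    nudge = alpha * (source_logits - pred_logits)
    return pred_logits + (1.0 - mask[..., None]) * nudge
```

With `alpha = 1.0`, regions outside the mask reproduce the source logits exactly, while masked regions keep the target-prompt prediction; intermediate values trade off preservation against edit strength.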
Source: arXiv:2604.14591