VIBE: Visual Instruction Based Editor
1️⃣ One-Sentence Summary
This paper presents VIBE, an efficient image-editing system that pairs a relatively small vision-language model with a lightweight diffusion model. It substantially reduces compute and memory costs while maintaining high-quality edits, allowing it to run quickly on commodity hardware.
2️⃣ Abstract
Instruction-based image editing is among the fastest-developing areas in generative AI. Over the past year, the field has reached a new level, with dozens of open-source models released alongside highly capable commercial systems. However, only a limited number of open-source approaches currently achieve real-world quality. In addition, diffusion backbones, the dominant choice for these pipelines, are often large and computationally expensive for many deployment and research settings, with widely used variants typically containing 6B to 20B parameters. This paper presents a compact, high-throughput instruction-based image editing pipeline that uses a modern 2B-parameter Qwen3-VL model to guide the editing process and the 1.6B-parameter diffusion model Sana1.5 for image generation. Our design decisions across architecture, data processing, training configuration, and evaluation target low-cost inference and strict source consistency while maintaining high quality across the major edit categories feasible at this scale. Evaluated on the ImgEdit and GEdit benchmarks, the proposed method matches or exceeds the performance of substantially heavier baselines, including models with several times as many parameters and higher inference cost, and is particularly strong on edits that require preserving the input image, such as attribute adjustment, object removal, background edits, and targeted replacement. The model fits within 24 GB of GPU memory and generates edited images at up to 2K resolution in approximately 4 seconds on an NVIDIA H100 in BF16, without additional inference optimizations or distillation.
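To make the two-stage design concrete, here is a minimal, purely illustrative sketch of the control flow the abstract describes: a small vision-language model turns a free-form instruction into a structured edit plan, which a lightweight diffusion model then consumes. None of this is the authors' code; the names (`EditPlan`, `plan_edit`, `apply_edit`) and the toy rule-based "VLM" are assumptions standing in for the real Qwen3-VL and Sana1.5 components.

```python
# Hypothetical sketch of a VLM-guided editing pipeline (not the paper's code).
from dataclasses import dataclass

@dataclass
class EditPlan:
    operation: str   # e.g. "remove", "replace", "adjust_attribute"
    target: str      # object or region the instruction refers to
    detail: str      # extra guidance passed on to the generator

def plan_edit(instruction: str) -> EditPlan:
    """Stand-in for the 2B VLM: map an instruction to a structured plan."""
    text = instruction.lower().strip()
    if text.startswith("remove "):
        return EditPlan("remove", text[len("remove "):], "inpaint background")
    if text.startswith("change ") and " to " in text:
        target, _, detail = text[len("change "):].partition(" to ")
        return EditPlan("adjust_attribute", target, detail)
    return EditPlan("replace", text, "")

def apply_edit(image: dict, plan: EditPlan) -> dict:
    """Stand-in for the 1.6B diffusion model: record the edit on a mock image."""
    edited = dict(image)
    edited["edits"] = image.get("edits", []) + [(plan.operation, plan.target)]
    return edited

plan = plan_edit("remove the red car")
result = apply_edit({"size": (2048, 2048)}, plan)
print(plan.operation, plan.target)  # remove the red car
```

The point of the split is the one the abstract makes: the expensive open-ended reasoning (understanding "what to edit") lives in the compact VLM, so the diffusion backbone only needs to execute a well-specified generation step, which is what keeps the whole pipeline within a 24 GB memory budget.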
Source: arXiv:2601.02242