arXiv submission date: 2025-12-27
📄 Abstract - DreamOmni3: Scribble-based Editing and Generation

Recently, unified generation and editing models have achieved remarkable success. These models rely mainly on text prompts for instruction-based editing and generation, but language often fails to capture users' intended edit locations and fine-grained visual details. To this end, we propose two tasks, scribble-based editing and generation, which enable more flexible creation in a graphical user interface (GUI) by combining users' text, images, and freehand sketches. We introduce DreamOmni3, tackling two challenges: data creation and framework design. Our data synthesis pipeline includes two parts: scribble-based editing and scribble-based generation. For scribble-based editing, we define four tasks: scribble and instruction-based editing, scribble and multimodal instruction-based editing, image fusion, and doodle editing. Based on the DreamOmni2 dataset, we extract editable regions and overlay hand-drawn boxes, circles, doodles, or cropped images to construct training data. For scribble-based generation, we define three tasks: scribble and instruction-based generation, scribble and multimodal instruction-based generation, and doodle generation, following similar data creation pipelines. For the framework, instead of using binary masks, which struggle with complex edits involving multiple scribbles, images, and instructions, we propose a joint input scheme that feeds both the original and scribbled source images into the model, using different colors to distinguish regions and simplify processing. By applying the same index and position encodings to both images, the model can precisely localize scribbled regions while maintaining accurate editing. Finally, we establish comprehensive benchmarks for these tasks to promote further research. Experimental results demonstrate that DreamOmni3 achieves outstanding performance; models and code will be publicly released.
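
The joint input scheme is only described at a high level in the abstract. The sketch below is a hypothetical illustration, not the authors' released code: it assumes PyTorch, toy patch sizes, a toy sinusoidal positional embedding, and made-up helper names (`patchify`, `build_joint_input`). It shows the core idea of tokenizing both the clean source image and its scribbled copy and giving corresponding patches identical positional encodings, so the model can align scribbled regions with the clean content (the paper's additional index encodings are omitted for brevity).

```python
# Minimal sketch of the joint-input idea, assuming PyTorch and a ViT-style
# patch tokenization. Shapes, dimensions, and helper names are illustrative.
import torch

def patchify(img: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Split a (C, H, W) image into a (N, C*patch*patch) sequence of patch tokens."""
    c, h, w = img.shape
    img = img.unfold(1, patch, patch).unfold(2, patch, patch)  # (C, H/p, W/p, p, p)
    return img.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)

def positional_embedding(num_tokens: int, dim: int) -> torch.Tensor:
    """Toy sinusoidal positional embedding, one row per patch index."""
    pos = torch.arange(num_tokens, dtype=torch.float32).unsqueeze(1)
    freq = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                     * (-torch.log(torch.tensor(10000.0)) / dim))
    emb = torch.zeros(num_tokens, dim)
    emb[:, 0::2] = torch.sin(pos * freq)
    emb[:, 1::2] = torch.cos(pos * freq)
    return emb

def build_joint_input(original: torch.Tensor, scribbled: torch.Tensor,
                      patch: int = 16, dim: int = 768) -> torch.Tensor:
    """Concatenate tokens of the original and scribbled images, reusing the SAME
    positional embedding for both so patch i of each image shares one position."""
    tok_orig = patchify(original, patch)            # (N, C*p*p)
    tok_scri = patchify(scribbled, patch)           # (N, C*p*p), same N
    proj = torch.nn.Linear(tok_orig.shape[1], dim)  # toy patch projection
    pos = positional_embedding(tok_orig.shape[0], dim)
    # Shared positional encoding: the identical `pos` is added to both streams.
    return torch.cat([proj(tok_orig) + pos, proj(tok_scri) + pos], dim=0)  # (2N, dim)

# Usage: a red box painted onto a copy of the source stands in for a user scribble.
original = torch.rand(3, 256, 256)
scribbled = original.clone()
scribbled[:, 64:128, 64:128] = torch.tensor([1.0, 0.0, 0.0]).view(3, 1, 1)
tokens = build_joint_input(original, scribbled)
print(tokens.shape)  # torch.Size([512, 768])
```

Because both token streams carry the same positions, attention over the joint sequence can relate each scribbled patch directly to its clean counterpart, which is what lets the model localize the edit region without a separate binary mask.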

Top-level tags: computer vision, multi-modal, model training
Detailed tags: scribble-based editing, image generation, multimodal instruction, data synthesis, benchmark

DreamOmni3: Scribble-based Editing and Generation


1️⃣ One-Sentence Summary

This paper proposes a new model called DreamOmni3 that lets users flexibly edit or generate images with simple scribbles, text, and reference images, addressing the difficulty traditional methods have in precisely localizing edits and expressing fine-grained details.

Source: arXiv 2512.22525