arXiv submission date: 2026-05-14
📄 Abstract - From Plans to Pixels: Learning to Plan and Orchestrate for Open-Ended Image Editing

Modern image editing models produce realistic results but struggle with abstract, multi-step instructions (e.g., "make this advertisement more vegetarian-friendly"). Prior agent-based methods decompose such tasks but rely on handcrafted pipelines or teacher imitation, limiting flexibility and decoupling learning from actual editing outcomes. We propose an experiential framework for long-horizon image editing, where a planner generates structured atomic decompositions and an orchestrator selects tools and regions to execute each step. A vision-language judge provides outcome-based rewards for instruction adherence and visual quality. The orchestrator is trained to maximize these rewards, and successful trajectories are used to refine the planner. By tightly coupling planning with reward-driven execution, our approach yields more coherent and reliable edits than single-step or rule-based multi-step baselines.
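The planner → orchestrator → judge loop the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration only: all class names, tool names, and stub logic are assumptions for exposition, not the paper's actual components or API.

```python
# Hypothetical sketch of the plan -> orchestrate -> judge loop from the
# abstract. All names, tools, and stub logic are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple, List

@dataclass
class Step:
    description: str                       # one atomic edit instruction
    tool: Optional[str] = None             # filled in by the orchestrator
    region: Optional[Tuple[int, ...]] = None  # region the tool operates on

def plan(instruction: str) -> List[Step]:
    """Planner: decompose an abstract instruction into atomic steps (stubbed)."""
    return [Step("replace meat items with vegetables"),
            Step("add a 'plant-based' text banner")]

def orchestrate(step: Step) -> Step:
    """Orchestrator: choose a tool and region for one step (stubbed policy)."""
    step.tool = "inpaint" if "replace" in step.description else "text_overlay"
    step.region = (0, 0, 256, 256)  # placeholder bounding box
    return step

def judge(steps: List[Step]) -> float:
    """VLM judge: score instruction adherence and visual quality (stubbed)."""
    return 1.0 if all(s.tool is not None for s in steps) else 0.0

def edit(instruction: str):
    steps = [orchestrate(s) for s in plan(instruction)]
    reward = judge(steps)
    # During training, the orchestrator would be updated to maximize `reward`,
    # and high-reward trajectories would be replayed to refine the planner.
    return steps, reward

steps, reward = edit("make this advertisement more vegetarian-friendly")
```

The key design point the abstract emphasizes is that the reward flows from the *outcome* (the judge's score on the final edit) rather than from imitating a fixed teacher pipeline, so both the orchestrator's tool choices and, indirectly, the planner's decompositions are shaped by what actually worked.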

Top-level tags: computer vision, multi-modal, agents
Detailed tags: image editing, planning, reward learning, multi-step, visual quality

From Plans to Pixels: Learning to Plan and Orchestrate for Open-Ended Image Editing


1️⃣ One-sentence summary

This paper proposes a method that lets an AI handle complex, ambiguous, long-horizon image editing tasks (e.g., "make this advertisement more vegetarian-friendly") by first drafting a step-by-step plan and then executing tool operations one step at a time; a vision-language model scores the outcome of each step as a reward signal, enabling the system to improve its own planning and execution and ultimately produce more coherent and reliable edits than single-step or fixed-pipeline methods.

Source: arXiv 2605.15181