arXiv submission date: 2026-03-04
📄 Abstract - InEdit-Bench: Benchmarking Intermediate Logical Pathways for Intelligent Image Editing Models

Multimodal generative models have made significant strides in image editing, demonstrating impressive performance on a variety of static tasks. However, their proficiency typically does not extend to complex scenarios requiring dynamic reasoning, leaving them ill-equipped to model the coherent, intermediate logical pathways that constitute a multi-step evolution from an initial state to a final one. This capacity is crucial for unlocking a deeper level of procedural and causal understanding in visual manipulation. To systematically measure this critical limitation, we introduce InEdit-Bench, the first evaluation benchmark dedicated to reasoning over intermediate pathways in image editing. InEdit-Bench comprises meticulously annotated test cases covering four fundamental task categories: state transition, dynamic process, temporal sequence, and scientific simulation. Additionally, to enable fine-grained evaluation, we propose a set of assessment criteria to evaluate the logical coherence and visual naturalness of the generated pathways, as well as the model's fidelity to specified path constraints. Our comprehensive evaluation of 14 representative image editing models on InEdit-Bench reveals significant and widespread shortcomings in this domain. By providing a standardized and challenging benchmark, we aim for InEdit-Bench to catalyze research and steer development towards more dynamic, reason-aware, and intelligent multimodal generative models.
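The abstract names three assessment criteria (logical coherence, visual naturalness, and fidelity to path constraints) applied across four task categories. As a minimal sketch of how such per-sample scores might be aggregated into per-category results, assuming a 0–1 score scale and an unweighted mean (both illustrative assumptions, not the paper's actual protocol):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class PathwayScore:
    """Hypothetical per-sample score; field names mirror the abstract's
    criteria, but the scale and aggregation are illustrative assumptions."""
    task_category: str        # e.g. "state_transition", "dynamic_process"
    logical_coherence: float  # 0.0-1.0: do intermediate steps follow logically?
    visual_naturalness: float # 0.0-1.0: do generated frames look plausible?
    path_fidelity: float      # 0.0-1.0: does the pathway obey the constraints?

    def overall(self) -> float:
        # Unweighted mean over the three criteria (an assumption).
        return mean([self.logical_coherence,
                     self.visual_naturalness,
                     self.path_fidelity])

def category_averages(scores: list[PathwayScore]) -> dict[str, float]:
    """Average the overall score per task category."""
    by_cat: dict[str, list[float]] = {}
    for s in scores:
        by_cat.setdefault(s.task_category, []).append(s.overall())
    return {cat: mean(vals) for cat, vals in by_cat.items()}

# Usage with made-up scores:
scores = [PathwayScore("state_transition", 0.9, 0.8, 0.7),
          PathwayScore("state_transition", 0.6, 0.6, 0.6)]
print(category_averages(scores))  # state_transition averages to 0.7
```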

Top-level tags: multi-modal model, evaluation benchmark
Detailed tags: image editing, logical reasoning, evaluation benchmark, multimodal models, dynamic processes

InEdit-Bench: Benchmarking Intermediate Logical Pathways for Intelligent Image Editing Models


1️⃣ One-sentence summary

This paper introduces InEdit-Bench, the first benchmark for evaluating the dynamic reasoning ability of image editing models on complex multi-step tasks. It finds that current mainstream models show widespread shortcomings in this regard, and aims to drive the development of intelligent image generation models with stronger logical understanding and reasoning.

Source: arXiv 2603.03657