arXiv submission date: 2025-12-11
📄 Abstract - MotionEdit: Benchmarking and Learning Motion-Centric Image Editing

We introduce MotionEdit, a novel dataset for motion-centric image editing: the task of modifying subject actions and interactions while preserving identity, structure, and physical plausibility. Unlike existing image editing datasets that focus on static appearance changes or contain only sparse, low-quality motion edits, MotionEdit provides high-fidelity image pairs depicting realistic motion transformations extracted and verified from continuous videos. This new task is not only scientifically challenging but also practically significant, powering downstream applications such as frame-controlled video synthesis and animation. To evaluate model performance on the novel task, we introduce MotionEdit-Bench, a benchmark that challenges models on motion-centric edits and measures model performance with generative, discriminative, and preference-based metrics. Benchmark results reveal that motion editing remains highly challenging for existing state-of-the-art diffusion-based editing models. To address this gap, we propose MotionNFT (Motion-guided Negative-aware Fine Tuning), a post-training framework that computes motion alignment rewards based on how well the motion flow between input and model-edited images matches the ground-truth motion, guiding models toward accurate motion transformations. Extensive experiments on FLUX.1 Kontext and Qwen-Image-Edit show that MotionNFT consistently improves editing quality and motion fidelity of both base models on the motion editing task without sacrificing general editing ability, demonstrating its effectiveness.
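
To make the motion-alignment reward concrete, here is a minimal sketch assuming dense optical flow as the motion representation and average endpoint error (EPE) as the distance between flows. The function names (`dense_flow`, `motion_reward`), the Farneback flow estimator, and the `exp(-alpha * EPE)` reward mapping are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a motion-alignment reward in the spirit of MotionNFT:
# compare the flow induced by the model's edit against the flow induced by the
# ground-truth edit, then map the discrepancy to a reward in (0, 1].
import cv2
import numpy as np

def dense_flow(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Dense optical flow from img_a to img_b (H x W x 2), via Farneback."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

def motion_reward(input_img: np.ndarray,
                  edited_img: np.ndarray,
                  gt_target_img: np.ndarray,
                  alpha: float = 0.1) -> float:
    """Higher reward when the edit's motion flow matches the ground truth."""
    pred_flow = dense_flow(input_img, edited_img)    # motion the model produced
    gt_flow = dense_flow(input_img, gt_target_img)   # motion it should produce
    # Average endpoint error between predicted and ground-truth flow fields.
    epe = np.linalg.norm(pred_flow - gt_flow, axis=-1).mean()
    return float(np.exp(-alpha * epe))
```

In a negative-aware fine-tuning loop, a reward like this would score sampled edits so that high-reward (motion-faithful) samples are reinforced and low-reward samples are penalized; the paper's actual flow estimator and reward shaping may differ.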

Top-level tags: computer vision, model training, benchmark
Detailed tags: motion editing, image editing, dataset, fine-tuning, evaluation

MotionEdit: Benchmarking and Learning Motion-Centric Image Editing


1️⃣ One-sentence summary

This paper introduces a new dataset and benchmark dedicated to motion editing, and develops a post-training optimization method that markedly improves how naturally and accurately existing AI models modify a subject's actions in an image.


Source: arXiv:2512.10284