FlowSlider: Training-Free Continuous Image Editing via Fidelity-Steering Decomposition
1️⃣ One-Sentence Summary
This paper proposes a method that lets users smoothly control image-editing strength with a slider, without any additional training. It decomposes the editing process into a "fidelity term" that preserves source-image features and a "steering term" that drives the content change, achieving continuous editing that is both stable and high quality.
Continuous image editing aims to provide slider-style control of edit strength while preserving source-image fidelity and maintaining a consistent edit direction. Existing learning-based slider methods typically rely on auxiliary modules trained with synthetic or proxy supervision. This introduces additional training overhead and couples slider behavior to the training distribution, which can reduce reliability under distribution shifts in edits or domains. We propose \textit{FlowSlider}, a training-free method for continuous editing in Rectified Flow that requires no post-training. \textit{FlowSlider} decomposes FlowEdit's update into (i) a fidelity term, which acts as a source-conditioned stabilizer that preserves identity and structure, and (ii) a steering term that drives semantic transition toward the target edit. Geometric analysis and empirical measurements show that these terms are approximately orthogonal, enabling stable strength control by scaling only the steering term while keeping the fidelity term unchanged. As a result, \textit{FlowSlider} provides smooth and reliable control without post-training, improving continuous editing quality across diverse tasks.
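The core idea above can be sketched in a few lines: the per-step update is split into a source-conditioned fidelity term and a steering term, and only the steering term is scaled by the slider value. The following is a minimal illustrative sketch, not the paper's exact algorithm; the function name `flowslider_step` and the specific form of the terms (steering as the difference of target- and source-conditioned velocities) are assumptions for illustration.

```python
import numpy as np

def flowslider_step(z_t, v_src, v_tgt, dt, alpha):
    """One illustrative FlowSlider-style update step (assumed form).

    z_t    : current latent state
    v_src  : velocity conditioned on the source prompt
    v_tgt  : velocity conditioned on the target (edit) prompt
    dt     : integration step size of the rectified-flow ODE
    alpha  : slider strength; alpha = 0 keeps the source trajectory
    """
    fidelity = v_src               # source-conditioned stabilizer (unscaled)
    steering = v_tgt - v_src       # drives the semantic transition
    # Scale only the steering term; the fidelity term stays unchanged,
    # which is what makes strength control stable per the paper's analysis.
    return z_t + dt * (fidelity + alpha * steering)

z = np.zeros(3)
v_src = np.ones(3)
v_tgt = 2.0 * np.ones(3)
no_edit = flowslider_step(z, v_src, v_tgt, dt=0.1, alpha=0.0)  # follows source velocity
full_edit = flowslider_step(z, v_src, v_tgt, dt=0.1, alpha=1.0)  # follows target velocity
```

With `alpha = 0` the step reduces to the source-conditioned update, and increasing `alpha` interpolates continuously toward the full edit, which is the slider behavior the paper describes.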
Source: arXiv: 2604.02088