arXiv submission date: 2026-03-19
📄 Abstract - CRAFT: Aligning Diffusion Models with Fine-Tuning Is Easier Than You Think

Aligning diffusion models has achieved remarkable breakthroughs in generating high-quality, human preference-aligned images. Existing techniques, such as supervised fine-tuning (SFT) and DPO-style preference optimization, have become principled tools for fine-tuning diffusion models. However, SFT relies on high-quality images that are costly to obtain, while DPO-style methods depend on large-scale preference datasets, which are often inconsistent in quality. Beyond data dependency, these methods are further constrained by computational inefficiency. To address these two challenges, we propose Composite Reward Assisted Fine-Tuning (CRAFT), a lightweight yet powerful fine-tuning paradigm that requires significantly less training data while maintaining computational efficiency. It first leverages a Composite Reward Filtering (CRF) technique to construct a high-quality and consistent training dataset, and then performs an enhanced variant of SFT. We also theoretically prove that CRAFT optimizes a lower bound of group-based reinforcement learning, establishing a principled connection between SFT on selected data and reinforcement learning. Our extensive empirical results demonstrate that CRAFT with only 100 samples can easily outperform recent SOTA preference optimization methods trained on thousands of preference-paired samples. Moreover, CRAFT achieves 11-220$\times$ faster convergence than baseline preference optimization methods, highlighting its extremely high efficiency.
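The abstract describes a two-stage recipe: score candidate samples with a composite of reward models, keep only the best, then run SFT on the filtered set. The paper's actual reward models and weights are not given here, so the following is a minimal hypothetical sketch of the filtering stage, with toy stand-in reward functions:

```python
# Hypothetical sketch of Composite Reward Filtering (CRF).
# The reward functions, weights, and top-k selection rule below are
# illustrative assumptions, not the paper's published implementation.

def composite_reward(sample, rewards, weights):
    """Weighted sum of individual reward scores for one sample."""
    return sum(w * r(sample) for r, w in zip(rewards, weights))

def crf_select(samples, rewards, weights, k):
    """Keep the top-k samples by composite reward; these would form the SFT set."""
    ranked = sorted(samples,
                    key=lambda s: composite_reward(s, rewards, weights),
                    reverse=True)
    return ranked[:k]

# Toy usage: "samples" are scalars; real CRF would score generated images.
aesthetic = lambda x: x                    # stand-in for an aesthetic scorer
alignment = lambda x: 1.0 - abs(x - 0.5)   # stand-in for a prompt-alignment scorer
pool = [0.1, 0.5, 0.9, 0.3]
chosen = crf_select(pool, [aesthetic, alignment], [0.5, 0.5], k=2)
```

The filtered `chosen` subset would then be passed to an ordinary SFT loop; the data efficiency claimed in the abstract (100 samples sufficing) comes from this selection step producing a small but consistently high-reward training set.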

Top-level tags: model training, AIGC, machine learning
Detailed tags: diffusion models, fine-tuning, preference alignment, data efficiency, reinforcement learning

CRAFT: Aligning Diffusion Models with Fine-Tuning Is Easier Than You Think


1️⃣ One-sentence summary

This paper proposes a new fine-tuning method called CRAFT, which uses a composite reward filtering technique to align AI image-generation models with human preferences efficiently from only a small amount of high-quality data, while training far faster than existing mainstream methods.

Source: arXiv:2603.18991