arXiv submission date: 2026-03-16
📄 Abstract - Relevance Feedback in Text-to-Image Diffusion: A Training-Free And Model-Agnostic Interactive Framework

Text-to-image generation using diffusion models has achieved remarkable success. However, users often possess clear visual intents but struggle to express them precisely in language, resulting in ambiguous prompts and misaligned images. Existing methods struggle to bridge this gap, typically relying on high-load textual dialogues, opaque black-box inference, or expensive fine-tuning; none simultaneously achieves low cognitive load and interpretable preference inference while remaining training-free and model-agnostic. To address this, we propose RFD, an interactive framework that adapts the relevance feedback mechanism from information retrieval to diffusion models. In RFD, users replace explicit textual dialogue with implicit, multi-select visual feedback, minimizing cognitive load while easily expressing complex, multi-dimensional preferences. To translate feedback into precise generative guidance, we construct an expert-curated feature repository and introduce an information-theoretic weighted cumulative preference analysis. This white-box method computes preferences from the current round of feedback and accumulates them incrementally, avoiding the concatenation of historical interactions and preventing the inference degradation caused by lengthy contexts. Furthermore, RFD employs a probabilistic sampling mechanism for prompt reconstruction to balance exploitation and exploration, preventing output homogenization. Crucially, RFD operates entirely in the external text space, making it strictly training-free and model-agnostic: a universal plug-and-play solution. Extensive experiments demonstrate that RFD effectively captures the user's true visual intent, significantly outperforming baselines in preference alignment.
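The two mechanisms the abstract names — incremental accumulation of per-round feedback and probabilistic sampling for prompt reconstruction — can be sketched in a few lines. The sketch below is a minimal illustration under assumed formulas (exponential decay for accumulation, softmax sampling without replacement for reconstruction); the feature names, decay scheme, and function names are all hypothetical, not the paper's actual method.

```python
import math
import random

# Hypothetical stand-in for the expert-curated feature repository.
FEATURES = ["watercolor", "photorealistic", "soft lighting", "neon", "minimalist"]

def update_preferences(prefs, selected, decay=0.8, reward=1.0):
    """Accumulate preference scores from one feedback round (assumed scheme).

    prefs:    dict mapping feature -> cumulative score
    selected: features present in the images the user multi-selected
    Only the current round's feedback is processed; older rounds survive
    only through the decayed scores, so no interaction history is concatenated.
    """
    for f in FEATURES:
        prefs[f] = decay * prefs.get(f, 0.0) + (reward if f in selected else 0.0)
    return prefs

def sample_prompt_features(prefs, k=2, temperature=0.5, rng=random):
    """Softmax-sample k distinct features for prompt reconstruction.

    High-scoring features are favored (exploitation), but every feature
    keeps a nonzero probability (exploration), avoiding homogenized outputs.
    """
    feats = list(prefs)
    weights = [math.exp(prefs[f] / temperature) for f in feats]
    chosen = []
    for _ in range(k):
        total = sum(weights)
        r, acc = rng.random() * total, 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                chosen.append(feats[i])
                weights[i] = 0.0  # sample without replacement
                break
    return chosen

# Two illustrative feedback rounds: the user keeps selecting "watercolor".
prefs = {}
prefs = update_preferences(prefs, {"watercolor", "soft lighting"})
prefs = update_preferences(prefs, {"watercolor"})
prompt = "a castle, " + ", ".join(sample_prompt_features(prefs, k=2))
```

After the second round, "watercolor" (selected twice) outscores "soft lighting" (selected once, then decayed), so it dominates the sampling distribution without ever being guaranteed; the temperature parameter controls how sharply the sampler exploits the accumulated scores.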

Top-level tags: model training, model evaluation, natural language processing
Detailed tags: text-to-image generation, diffusion models, relevance feedback, interactive framework, preference alignment

Relevance Feedback in Text-to-Image Diffusion: A Training-Free and Model-Agnostic Interactive Framework


1️⃣ One-Sentence Summary

This paper proposes an interactive framework called RFD that lets users express their generative intent precisely through simple visual selection feedback; without training or modifying the underlying model, it significantly improves how well text-to-image generation results match the user's true intent.

Source: arXiv 2603.14936