arXiv submission date: 2026-04-14
📄 Abstract - PromptEcho: Annotation-Free Reward from Vision-Language Models for Text-to-Image Reinforcement Learning

Reinforcement learning (RL) can improve the prompt-following capability of text-to-image (T2I) models, yet obtaining high-quality reward signals remains challenging: CLIP Score is too coarse-grained, while VLM-based reward models (e.g., RewardDance) require costly human-annotated preference data and additional fine-tuning. We propose PromptEcho, a reward construction method that requires *no* annotation and *no* reward-model training. Given a generated image and a guiding query, PromptEcho computes the token-level cross-entropy loss of a frozen VLM with the original prompt as the label, directly extracting the image-text alignment knowledge encoded during VLM pretraining. The reward is deterministic, computationally efficient, and improves automatically as stronger open-source VLMs become available. For evaluation, we develop DenseAlignBench, a benchmark of concept-rich dense captions for rigorously testing prompt-following capability. Experimental results on two state-of-the-art T2I models (Z-Image and QwenImage-2512) demonstrate that PromptEcho achieves substantial improvements on DenseAlignBench (+26.8pp / +16.2pp net win rate), along with consistent gains on GenEval, DPG-Bench, and TIIFBench without any task-specific training. Ablation studies confirm that PromptEcho comprehensively outperforms inference-based scoring with the same VLM, and that reward quality scales with VLM size. We will open-source the trained models and DenseAlignBench.
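The abstract fully specifies the reward computation, so a short sketch may help make it concrete. The snippet below is a minimal illustration under stated assumptions: the scorer checkpoint (`llava-hf/llava-1.5-7b-hf`), the guiding-query wording, and the chat-template and token-accounting details are all illustrative choices, not the paper's released code.

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumed scorer; any open VLM could be swapped in

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()  # the VLM stays frozen: no fine-tuning, no reward-model training

@torch.no_grad()
def prompt_echo_reward(image, prompt: str) -> float:
    """Negative mean token-level cross-entropy of the original prompt,
    conditioned on the generated image and a guiding query."""
    # Assumed guiding query in the model's chat template (model-specific).
    query = "USER: <image>\nDescribe this image. ASSISTANT:"
    context = processor(text=query, images=image, return_tensors="pt")
    full = processor(text=query + " " + prompt, images=image, return_tensors="pt")

    labels = full["input_ids"].clone()
    ctx_len = context["input_ids"].shape[1]
    labels[:, :ctx_len] = -100  # mask the query so only the prompt tokens are scored

    out = model(**full, labels=labels)  # .loss = mean CE over unmasked label tokens
    # Lower cross-entropy means the frozen VLM finds the prompt a better
    # "echo" of the image, so negate the loss to obtain the reward.
    return -out.loss.item()
```

In an RL loop, this scalar would be computed once per sampled image and fed to the policy update; since the VLM is frozen and no sampling is involved, the same image-prompt pair always yields the same reward, matching the determinism the abstract claims.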

Top-level tags: model training · multi-modal · machine learning
Detailed tags: reinforcement learning · text-to-image · vision-language models · reward modeling · benchmark

PromptEcho: Annotation-Free Reward from Vision-Language Models for Text-to-Image Reinforcement Learning


1️⃣ One-Sentence Summary

This paper proposes PromptEcho, a method that needs no human annotation and no additional training: it directly uses an off-the-shelf vision-language model to produce high-quality reward signals, effectively improving how well text-to-image models follow complex textual descriptions.

Source: arXiv: 2604.12652