Abstract - Extending One-Step Image Generation from Class Labels to Text via Discriminative Text Representation
Few-step generation has been a long-standing goal, with recent one-step methods exemplified by MeanFlow achieving remarkable results. Existing research on MeanFlow focuses primarily on class-to-image generation. An intuitive yet unexplored direction, however, is to extend the condition from fixed class labels to flexible text inputs, enabling richer content creation. Compared to a limited set of class labels, text conditions place greater demands on the model's understanding capability, necessitating the effective integration of powerful text encoders into the MeanFlow framework. Surprisingly, although incorporating text conditions appears straightforward, we find that integrating powerful LLM-based text encoders with conventional training strategies yields unsatisfactory performance. To uncover the underlying cause, we conduct detailed analyses and reveal that, because MeanFlow performs extremely few refinement steps (often just one), the text feature representations must be sufficiently discriminative. This also explains why discrete and easily distinguishable class features perform well within the MeanFlow framework. Guided by these insights, we leverage a powerful LLM-based text encoder validated to possess the required semantic properties and adapt the MeanFlow generation process to this setting, achieving efficient text-conditioned synthesis for the first time. Furthermore, we validate our approach on a widely used diffusion model, demonstrating significant improvements in generation performance. We hope this work provides a general and practical reference for future research on text-conditioned MeanFlow generation. The code is available at this https URL.
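The "one step" the abstract emphasizes can be sketched as a single evaluation of MeanFlow's average-velocity network, conditioned on a text embedding. This is a minimal illustrative sketch, not the paper's implementation: `mean_flow_net` is a hypothetical toy stand-in for the trained network, and the embedding dimension is arbitrary.

```python
import numpy as np

def mean_flow_net(z, r, t, cond):
    # Hypothetical stand-in for the trained network predicting the
    # average velocity u(z, r, t | cond); a toy linear map so the
    # sketch runs end to end. The real model is a deep network.
    return 0.5 * z + 0.1 * cond

def one_step_sample(text_embedding, dim=8, seed=0):
    """One-step MeanFlow sampling: a single network evaluation carries
    pure noise z_1 all the way to a sample z_0, so there is no iterative
    refinement in which weak conditioning signals could be corrected."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(dim)  # Gaussian noise at t = 1
    u = mean_flow_net(z1, r=0.0, t=1.0, cond=text_embedding)
    z0 = z1 - u                    # z_0 = z_1 - (1 - 0) * u(z_1, 0, 1)
    return z0

sample = one_step_sample(np.ones(8))
print(sample.shape)  # (8,)
```

Because the condition enters only once, the sketch makes the abstract's point concrete: if two different prompts map to nearly identical embeddings, their one-step outputs are nearly identical too.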
Extending One-Step Image Generation from Class Labels to Text via Discriminative Text Representation
1️⃣ One-Sentence Summary
This paper finds that the key to extending efficient one-step generation models from simple class-label conditions to flexible text-description conditions is ensuring that the text features are highly discriminative; by adapting a powerful LLM-based text encoder with the required properties, it achieves this goal and significantly improves text-to-image generation performance.
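The "discriminability" the summary highlights can be probed with a simple diagnostic: how far apart, on average, are the embeddings of different prompts? This is a hypothetical check for intuition, not the paper's actual metric or analysis procedure.

```python
import numpy as np

def discriminability(embeddings):
    """Mean pairwise cosine distance between condition embeddings.
    Higher values mean the features are easier to tell apart, which
    is what one-step generation needs from its text encoder."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T                           # pairwise cosine similarities
    n = len(e)
    off_diag = sim[~np.eye(n, dtype=bool)]  # drop self-similarities
    return 1.0 - off_diag.mean()

# Class-label-like features are orthogonal and maximally separated;
# a weak text encoder may map different prompts to nearly collinear vectors.
class_like = np.eye(3)
collapsed = np.array([[1.0, 0.01, 0.00],
                      [1.0, 0.00, 0.01],
                      [1.0, 0.01, 0.01]])
print(discriminability(class_like) > discriminability(collapsed))  # True
```

On this toy data the orthogonal "class-like" features score near 1.0 while the nearly collinear ones score near 0.0, mirroring the summary's claim that discrete class features succeed where poorly separated text features fail.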