📄 Paper Summary
Generating an Image From 1,000 Words: Enhancing Text-to-Image With Structured Captions
1️⃣ One-Sentence Summary
By training the first open-source text-to-image model on long structured captions, and by introducing a new fusion mechanism and a new evaluation protocol, this paper addresses the limited controllability caused by the short prompts conventional models are trained on, markedly improving the precision and controllability of generated images.
2️⃣ Abstract
Text-to-image models have rapidly evolved from casual creative tools to professional-grade systems, achieving unprecedented levels of image quality and realism. Yet, most models are trained to map short prompts into detailed images, creating a gap between sparse textual input and rich visual outputs. This mismatch reduces controllability, as models often fill in missing details arbitrarily, biasing toward average user preferences and limiting precision for professional use. We address this limitation by training the first open-source text-to-image model on long structured captions, where every training sample is annotated with the same set of fine-grained attributes. This design maximizes expressive coverage and enables disentangled control over visual factors. To process long captions efficiently, we propose DimFusion, a fusion mechanism that integrates intermediate tokens from a lightweight LLM without increasing token length. We also introduce the Text-as-a-Bottleneck Reconstruction (TaBR) evaluation protocol. By assessing how well real images can be reconstructed through a captioning-generation loop, TaBR directly measures controllability and expressiveness, even for very long captions where existing evaluation methods fail. Finally, we demonstrate our contributions by training the large-scale model FIBO, achieving state-of-the-art prompt alignment among open-source models. Model weights are publicly available at this https URL
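The TaBR protocol described above can be sketched as a simple loop: caption a real image, regenerate an image from that caption alone, and score how much of the original survives. The sketch below is only an illustration of this idea under stated assumptions; the `captioner`, `generator`, and `similarity` callables are hypothetical stand-ins, not the paper's actual models or metric, and the toy "image" is a dictionary of attributes rather than pixels.

```python
import json

# Hedged sketch of Text-as-a-Bottleneck Reconstruction (TaBR):
# the caption is the only information channel ("bottleneck") between
# the original image and its reconstruction, so reconstruction
# similarity measures how expressive and controllable the caption is.
def tabr_score(image, captioner, generator, similarity):
    """Return a reconstruction-similarity score for one image."""
    caption = captioner(image)           # image -> long structured caption
    reconstruction = generator(caption)  # caption -> regenerated image
    return similarity(image, reconstruction)

# Toy demonstration with stand-in components: an "image" is a dict of
# fine-grained attributes, the captioner serializes it to text, the
# generator parses the text back, and similarity counts matching
# attributes. All names here are illustrative, not from the paper.
image = {"subject": "red fox", "lighting": "golden hour", "style": "photo"}
score = tabr_score(
    image,
    captioner=json.dumps,
    generator=json.loads,
    similarity=lambda a, b: sum(a[k] == b.get(k) for k in a) / len(a),
)
print(score)  # a lossless bottleneck reconstructs every attribute
```

Because the toy caption here is lossless, the score is 1.0; with a real captioner and generator, the score degrades exactly where the caption fails to encode or the generator fails to follow an attribute, which is what makes the loop a direct probe of controllability.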