arXiv submission date: 2025-12-17
📄 Abstract - GRAN-TED: Generating Robust, Aligned, and Nuanced Text Embedding for Diffusion Models

The text encoder is a critical component of text-to-image and text-to-video diffusion models, fundamentally determining the semantic fidelity of the generated content. However, its development has been hindered by two major challenges: the lack of an efficient evaluation framework that reliably predicts downstream generation performance, and the difficulty of effectively adapting pretrained language models for visual synthesis. To address these issues, we introduce GRAN-TED, a paradigm to Generate Robust, Aligned, and Nuanced Text Embeddings for Diffusion models. Our contribution is twofold. First, we propose TED-6K, a novel text-only benchmark that enables efficient and robust assessment of an encoder's representational quality without requiring costly end-to-end model training. We demonstrate that performance on TED-6K, standardized via a lightweight, unified adapter, strongly correlates with an encoder's effectiveness in downstream generation tasks. Notably, under our experimental setup, compared with training a diffusion model from scratch, evaluating with TED-6K is about **750× faster**. Second, guided by this validated framework, we develop a superior text encoder using a novel two-stage training paradigm. This process involves an initial fine-tuning stage on a Multimodal Large Language Model for better visual representation, followed by a layer-wise weighting method to extract more nuanced and potent text features. Our experiments show that the resulting GRAN-TED encoder not only achieves state-of-the-art performance on TED-6K but also leads to demonstrable performance gains in text-to-image and text-to-video generation. Our TED-6K dataset and evaluation code are available at the following link: this https URL.
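The abstract does not spell out how the layer-wise weighting is implemented. As a minimal illustrative sketch (the function name, shapes, and the softmax parameterization are assumptions, not details from the paper), such a method typically learns one scalar score per encoder layer, softmax-normalizes the scores into mixing weights, and takes a weighted sum of the per-layer hidden states:

```python
import numpy as np

def layerwise_weighted_embedding(hidden_states, layer_logits):
    """Combine per-layer hidden states into a single text embedding.

    hidden_states: (num_layers, seq_len, dim) array, one entry per
        transformer layer of the text encoder.
    layer_logits: (num_layers,) array of learnable scores that are
        softmax-normalized into mixing weights.
    """
    w = np.exp(layer_logits - layer_logits.max())
    w = w / w.sum()  # softmax over the layer axis
    # Weighted sum over layers -> (seq_len, dim)
    return np.tensordot(w, hidden_states, axes=1)

# Toy example: 4 layers, 8 tokens, 16-dim hidden states.
rng = np.random.default_rng(0)
states = rng.standard_normal((4, 8, 16))
# Zero logits give uniform weights, i.e. a plain mean over layers.
emb = layerwise_weighted_embedding(states, np.zeros(4))
print(emb.shape)  # (8, 16)
```

In training, `layer_logits` would be optimized jointly with the rest of the pipeline, letting the model emphasize whichever layers carry the most generation-relevant semantics rather than always using the final layer.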

Top-level tags: model training, model evaluation, multi-modal
Detailed tags: text encoder, diffusion models, benchmark, text-to-image, text-to-video

GRAN-TED: Generating Robust, Aligned, and Nuanced Text Embedding for Diffusion Models


1️⃣ One-sentence summary

This paper proposes a new method called GRAN-TED, which uses a fast, efficient text-only benchmark together with a two-stage training strategy to substantially improve the text encoder of text-to-image and text-to-video diffusion models, so that the generated images and videos match their text prompts more faithfully.

Source: arXiv:2512.15560