arXiv submission date: 2025-12-15
📄 Abstract - Towards Scalable Pre-training of Visual Tokenizers for Generation

The quality of the latent space in visual tokenizers (e.g., VAEs) is crucial for modern generative models. However, the standard reconstruction-based training paradigm produces a latent space biased towards low-level information, leading to a foundational flaw: better pixel-level accuracy does not lead to higher-quality generation. This implies that pouring extensive compute into visual tokenizer pre-training translates poorly into improved generative performance. We identify this as the "pre-training scaling problem" and suggest a necessary shift: to be effective for generation, a latent space must concisely represent high-level semantics. We present VTP, a unified visual tokenizer pre-training framework that pioneers the joint optimization of image-text contrastive, self-supervised, and reconstruction losses. Our large-scale study reveals two principal findings: (1) understanding is a key driver of generation, and (2) the unified objective exhibits much better scaling properties, with generative performance scaling effectively with the compute, parameters, and data allocated to tokenizer pre-training. After large-scale pre-training, our tokenizer delivers a competitive profile (78.2% zero-shot accuracy and 0.36 rFID on ImageNet) and 4.1x faster convergence on generation compared to advanced distillation methods. More importantly, it scales effectively: without modifying the standard DiT training recipe, simply investing more FLOPS in VTP pre-training achieves a 65.8% FID improvement in downstream generation, while the conventional autoencoder stagnates very early, at 1/10 of the FLOPS. Our pre-trained models are available at this https URL.
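The abstract's central mechanism is the joint optimization of three losses on the tokenizer: image-text contrastive, self-supervised, and reconstruction. A minimal NumPy sketch of what such a combined objective could look like is below; all function and parameter names (`joint_vtp_loss`, the loss weights, the CLIP-style symmetric InfoNCE, the feature-regression self-supervised term) are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def softmax_xent(logits, labels):
    # Numerically stable cross-entropy over rows of logits vs. integer labels.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def joint_vtp_loss(recon, images, img_emb, txt_emb, ssl_pred, ssl_target,
                   w_clip=1.0, w_ssl=1.0, w_rec=1.0, temperature=0.07):
    # (1) Reconstruction: pixel-level L2 between decoded and input images.
    rec = np.mean((recon - images) ** 2)
    # (2) Image-text contrastive: CLIP-style symmetric InfoNCE on
    # L2-normalized image/text embeddings (matched pairs on the diagonal).
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(logits.shape[0])
    clip = (softmax_xent(logits, labels) + softmax_xent(logits.T, labels)) / 2
    # (3) Self-supervised: here sketched as feature regression to a
    # fixed (e.g. EMA-teacher) target; the paper's exact SSL term may differ.
    ssl = np.mean((ssl_pred - ssl_target) ** 2)
    # Weighted sum of the three terms, jointly backpropagated in training.
    return w_clip * clip + w_ssl * ssl + w_rec * rec
```

Each term is non-negative, so the combined loss is too; in an actual training loop the weights would balance semantic (contrastive/SSL) pressure against pixel fidelity, which is precisely the trade-off the abstract argues reconstruction-only training gets wrong.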

Top-level tags: model training, computer vision, AIGC
Detailed tags: visual tokenizer, generative models, latent space, pre-training, scaling

Towards Scalable Pre-training of Visual Tokenizers for Generation


1️⃣ One-sentence summary

This paper finds that the pre-training of conventional visual tokenizers (e.g., VAEs) suffers from a "scaling problem": chasing pixel-level reconstruction accuracy alone does not improve generation quality. It proposes a new framework, VTP, which jointly optimizes multiple loss functions so the model learns high-level semantics, enabling generative performance to scale effectively with the compute invested.


Source: arXiv: 2512.13687