Visual Generation Tuning (VGT)
1️⃣ One-Sentence Summary
This work proposes a new method called VGT that efficiently unlocks the latent visual generation ability of existing vision language models, achieving strong results on image reconstruction and generation tasks and opening a new path toward next-generation unified multimodal foundation models.
Large Vision Language Models (VLMs) effectively bridge the modality gap through extensive pretraining, acquiring sophisticated visual representations aligned with language. However, it remains underexplored whether these representations, optimized for multimodal understanding tasks, harbor an inherent potential for visual generation. In this paper, we propose VGT, Visual Generation Tuning, a novel paradigm designed to stimulate the underlying capabilities of visual generation within any vision language model. By performing efficient visual generation tuning on well-pretrained VLMs, we significantly mitigate the alignment costs and accelerate the convergence of autoregressive modeling in the continuous space (20x speedup). Specifically, we dismiss the entangled pixel-level VAEs designed for diffusion transformers and formulate VGT-AE by aligning the semantic encoders from pretrained VLMs with the latent representations of pixel decoders. In image reconstruction tasks, we achieve 26.67 PSNR and 0.50 rFID at a 28x compression ratio, outperforming specialized VAEs; in visual generation tasks, we achieve state-of-the-art outcomes among autoregressive models, 0.77 on GenEval and 78.73 on DPG-Bench. Furthermore, our proposed VGT showcases significant scaling promise and is versatile for endowing any VLM trained for multimodal understanding with the capability of visual generation, which paves a new avenue toward next-generation unified multimodal foundation models. Models and code are available at this https URL.
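To make the VGT-AE idea from the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the overall structure: a pretrained VLM's semantic vision encoder produces token embeddings, a small projection aligns them with the continuous latent space of a pixel decoder, and the decoder reconstructs the image. The encoder here is a stand-in module rather than a real VLM vision tower, and all module names, dimensions, and the reconstruction loss are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the VGT-AE structure described in the abstract.
# Not the authors' code: the encoder is a stub for a pretrained VLM vision tower,
# and all dimensions/names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticEncoderStub(nn.Module):
    """Stand-in for a pretrained VLM vision encoder (e.g., a ViT tower)."""
    def __init__(self, patch=16, dim=1024):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                          # x: (B, 3, H, W)
        tokens = self.patchify(x)                  # (B, dim, H/p, W/p)
        return tokens.flatten(2).transpose(1, 2)   # (B, N, dim) semantic tokens


class PixelDecoder(nn.Module):
    """Lightweight decoder mapping aligned continuous latents back to pixels."""
    def __init__(self, dim=1024, patch=16):
        super().__init__()
        self.to_pixels = nn.Linear(dim, 3 * patch * patch)
        self.patch = patch

    def forward(self, z, hw):                      # z: (B, N, dim)
        B, N, _ = z.shape
        h, w = hw
        x = self.to_pixels(z)                      # (B, N, 3*p*p)
        x = x.transpose(1, 2).view(B, 3 * self.patch * self.patch, h, w)
        return F.pixel_shuffle(x, self.patch)      # (B, 3, h*p, w*p)


class VGTAE(nn.Module):
    """Semantic-encoder-to-pixel-decoder alignment, in the spirit of VGT-AE."""
    def __init__(self, dim=1024, patch=16):
        super().__init__()
        self.encoder = SemanticEncoderStub(patch, dim)
        self.align = nn.Linear(dim, dim)           # aligns semantic tokens to decoder latents
        self.decoder = PixelDecoder(dim, patch)
        self.patch = patch

    def forward(self, x):
        B, _, H, W = x.shape
        sem = self.encoder(x)                      # semantic tokens from the (frozen) VLM encoder
        lat = self.align(sem)                      # aligned continuous latents
        rec = self.decoder(lat, (H // self.patch, W // self.patch))
        return rec, lat


if __name__ == "__main__":
    model = VGTAE()
    img = torch.randn(2, 3, 256, 256)
    rec, lat = model(img)
    loss = F.mse_loss(rec, img)                    # reconstruction objective (illustrative)
    print(rec.shape, lat.shape, loss.item())
```

In this sketch the aligned latents `lat` are the continuous tokens that an autoregressive VLM would be tuned to predict; only the alignment projection and pixel decoder are new components, which is consistent with the paper's claim of low alignment cost and fast convergence.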
Source: arXiv: 2511.23469