arXiv submission date: 2025-12-08
📄 Abstract - One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation

Visual generative models (e.g., diffusion models) typically operate in compressed latent spaces to balance training efficiency and sample quality. In parallel, there has been growing interest in leveraging high-quality pre-trained visual representations, either by aligning them inside VAEs or directly within the generative model. However, adapting such representations remains challenging due to fundamental mismatches between understanding-oriented features and generation-friendly latent spaces. Representation encoders benefit from high-dimensional latents that capture diverse hypotheses for masked regions, whereas generative models favor low-dimensional latents that must faithfully preserve injected noise. This discrepancy has led prior work to rely on complex objectives and architectures. In this work, we propose FAE (Feature Auto-Encoder), a simple yet effective framework that adapts pre-trained visual representations into low-dimensional latents suitable for generation using as little as a single attention layer, while retaining sufficient information for both reconstruction and understanding. The key is to couple two separate deep decoders: one trained to reconstruct the original feature space, and a second that takes the reconstructed features as input for image generation. FAE is generic; it can be instantiated with a variety of self-supervised encoders (e.g., DINO, SigLIP) and plugged into two distinct generative families: diffusion models and normalizing flows. Across class-conditional and text-to-image benchmarks, FAE achieves strong performance. For example, on ImageNet 256x256, our diffusion model with CFG attains a near state-of-the-art FID of 1.29 (800 epochs) and 1.70 (80 epochs). Without CFG, FAE reaches the state-of-the-art FID of 1.48 (800 epochs) and 2.08 (80 epochs), demonstrating both high quality and fast learning.
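To make the two-decoder design concrete, here is a minimal PyTorch sketch. It is not code from the paper; all module names, dimensions, and depths are illustrative assumptions. The idea it captures: a single attention layer compresses frozen pretrained features (e.g., DINO/SigLIP tokens) into a low-dimensional latent, while a deeper decoder reconstructs the original feature space, whose output would then feed a separate generative model.

```python
import torch
import torch.nn as nn

class FeatureAutoEncoder(nn.Module):
    """Hypothetical sketch of the FAE idea: one attention layer encodes
    pretrained features into a low-dimensional latent; a deeper decoder
    reconstructs the original feature space. Sizes are assumptions."""

    def __init__(self, feat_dim=768, latent_dim=16, num_heads=8, dec_depth=6):
        super().__init__()
        # Encoder: a single attention layer plus a linear projection
        # down to the low-dimensional, generation-friendly latent.
        self.enc_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.to_latent = nn.Linear(feat_dim, latent_dim)
        # Feature decoder: a deeper stack that maps latents back to the
        # original (high-dimensional) representation space.
        self.from_latent = nn.Linear(latent_dim, feat_dim)
        dec_layer = nn.TransformerEncoderLayer(feat_dim, num_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=dec_depth)

    def encode(self, feats):
        # feats: (B, N, feat_dim) tokens from a frozen pretrained encoder.
        h, _ = self.enc_attn(feats, feats, feats)
        return self.to_latent(h)                    # (B, N, latent_dim)

    def decode(self, z):
        return self.decoder(self.from_latent(z))    # reconstructed features

    def forward(self, feats):
        z = self.encode(feats)
        return z, self.decode(z)

# Sketch of the reconstruction objective: match the frozen encoder's
# features. The reconstructed features would then be consumed by a
# second, separate decoder that renders pixels (not shown here).
fae = FeatureAutoEncoder()
feats = torch.randn(2, 196, 768)                    # stand-in for DINO tokens
z, recon = fae(feats)
loss = nn.functional.mse_loss(recon, feats)
```

Under this reading, the generative model (diffusion or normalizing flow) operates on the low-dimensional latent `z`, while the deep feature decoder and a downstream image decoder absorb the complexity that would otherwise require elaborate alignment objectives.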

Top tags: computer vision, model training, multi-modal
Detailed tags: image generation, latent space, feature adaptation, diffusion models, autoencoder

One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation


1️⃣ One-Sentence Summary

This paper proposes FAE, a simple framework that uses only a single attention layer to convert pretrained visual features, originally built for image understanding, into low-dimensional latent representations suited for image generation, enabling generators such as diffusion models to learn quickly and produce high-quality images.


Source: arXiv:2512.07829