arXiv submission date: 2026-05-07
📄 Abstract - Autoregressive Visual Generation Needs a Prologue

In this work, we propose Prologue, an approach to bridging the reconstruction-generation gap in autoregressive (AR) image generation. Instead of modifying visual tokens to satisfy both reconstruction and generation, Prologue generates a small set of prologue tokens prepended to the visual token sequence. These prologue tokens are trained exclusively with the AR cross-entropy (CE) loss, while visual tokens remain dedicated to reconstruction. This decoupled design lets us optimize generation through the AR model's true distribution without affecting reconstruction quality, which we further formalize from an ELBO perspective. On ImageNet 256x256, Prologue-Base reduces gFID from 21.01 to 10.75 without classifier-free guidance while keeping reconstruction almost unchanged; Prologue-Large reaches a competitive rFID of 0.99 and gFID of 1.46 using a standard AR model without auxiliary semantic supervision. Interestingly, driven only by AR gradients, prologue tokens exhibit emergent semantic structure: linear probing on 16 prologue tokens reaches 35.88% Top-1, far above the 23.71% of the first 16 tokens from a standard tokenizer; resampling with fixed prologue tokens preserves a similar high-level semantic layout. Our results suggest a new direction: generation quality can be improved by introducing a separate learned generative representation while leaving the original representation intact.
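The core mechanism described in the abstract — a small set of learnable prologue tokens prepended to the visual token sequence, where only the prologue receives AR cross-entropy gradients and the visual tokens stay frozen to the reconstruction path — can be illustrated with a minimal numpy sketch. All sizes here (16 prologue tokens, 256 visual tokens, dimension 64) and names are hypothetical choices for illustration, not values from the paper, and the real method trains an actual AR transformer rather than this shape-level mockup.

```python
import numpy as np

# Hypothetical sizes for illustration (not taken from the paper):
# 16 prologue tokens, 256 visual tokens, embedding dimension 64.
N_PROLOGUE, N_VISUAL, D = 16, 256, 64

rng = np.random.default_rng(0)

# Learnable prologue embeddings: in the paper's design these are
# trained exclusively with the AR cross-entropy loss.
prologue = rng.normal(size=(N_PROLOGUE, D))

# Visual token embeddings from the tokenizer's reconstruction path.
# They are treated as constants w.r.t. the CE objective here
# (a stop-gradient), so reconstruction quality is left untouched.
visual = rng.normal(size=(N_VISUAL, D))

# The AR model consumes the concatenated sequence:
# [prologue_1 .. prologue_16, visual_1 .. visual_256].
sequence = np.concatenate([prologue, visual], axis=0)

# A per-position mask marking which embeddings may be updated by
# CE gradients: only the prologue positions.
ce_grad_mask = np.zeros(len(sequence), dtype=bool)
ce_grad_mask[:N_PROLOGUE] = True
```

The decoupling is the point of the sketch: because CE gradients reach only the prologue positions, the generative objective can be optimized freely while the tokenizer's representation — and hence rFID — is unaffected.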

Top-level tags: computer vision, aigc, model training
Detailed tags: autoregressive image generation, representation learning, token design, reconstruction-generation gap

Autoregressive Visual Generation Needs a Prologue


1️⃣ One-sentence summary

This paper proposes Prologue, a method that prepends a small set of generation-dedicated "prologue" tokens to the image token sequence, decoupling reconstruction from generation. Without degrading reconstruction quality, it substantially improves autoregressive image generation, and, unexpectedly, these tokens are found to learn high-level semantic structure on their own.

Source: arXiv 2605.06137