arXiv submission date: 2025-12-06
📄 Abstract - Rethinking Training Dynamics in Scale-wise Autoregressive Generation

Recent advances in autoregressive (AR) generative models have produced increasingly powerful systems for media synthesis. Among them, next-scale prediction has emerged as a popular paradigm, where models generate images in a coarse-to-fine manner. However, scale-wise AR models suffer from exposure bias, which undermines generation quality. We identify two primary causes of this issue: (1) train-test mismatch, where the model must rely on its own imperfect predictions during inference, and (2) imbalance in scale-wise learning difficulty, where certain scales exhibit disproportionately higher optimization complexity. Through a comprehensive analysis of training dynamics, we propose Self-Autoregressive Refinement (SAR) to address these limitations. SAR introduces a Stagger-Scale Rollout (SSR) mechanism that performs lightweight autoregressive rollouts to expose the model to its own intermediate predictions, thereby aligning train-test patterns, and a complementary Contrastive Student-Forcing Loss (CSFL) that provides adequate supervision for self-generated contexts to ensure stable training. Experimental results show that applying SAR to pretrained AR models consistently improves generation quality with minimal computational overhead. For instance, SAR yields a 5.2% FID reduction on FlexVAR-d16 trained on ImageNet 256 within 10 epochs (5 hours on 32xA100 GPUs). Given its efficiency, scalability, and effectiveness, we expect SAR to serve as a reliable post-training method for visual autoregressive generation.
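To make the mechanism concrete, below is a minimal, self-contained sketch of the Stagger-Scale Rollout idea from the abstract: at one scale, the model's own greedy prediction replaces the ground-truth tokens before predicting the next scale, and the self-generated context receives extra supervision. Everything here is an illustrative assumption, not the paper's code: the toy GRU model `ToyScaleAR`, the token shapes, the greedy rollout, and in particular the loss form (a KL term standing in for the unspecified Contrastive Student-Forcing Loss, with an arbitrary 0.1 weight) are placeholders chosen to make the sketch runnable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SCALES = 64, 128, [1, 4, 16]  # tokens per scale, coarse -> fine

class ToyScaleAR(nn.Module):
    """Toy next-scale predictor: flattened context tokens -> next-scale logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.backbone = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ctx_tokens, n_next):
        # ctx_tokens: (B, T) tokens from all previous scales, flattened.
        h, _ = self.backbone(self.embed(ctx_tokens))
        # Repeat the final hidden state to predict every token of the next scale.
        last = h[:, -1:, :].expand(-1, n_next, -1)
        return self.head(last)  # (B, n_next, VOCAB)

def sar_step(model, gt_scales, k):
    """One hypothetical SSR training step: at scale k, swap the ground-truth
    tokens for the model's own prediction before predicting scale k + 1."""
    bos = torch.zeros(gt_scales[0].size(0), 1, dtype=torch.long)

    # 1) Lightweight rollout (no gradients): predict scale k from the true prefix.
    with torch.no_grad():
        prefix = torch.cat([bos] + gt_scales[:k], dim=1)
        self_pred = model(prefix, SCALES[k]).argmax(-1)  # model's own sample

    # 2) Student-forcing pass: condition scale k + 1 on the self-predicted
    #    scale k instead of ground truth, mimicking inference-time context.
    ctx_student = torch.cat([prefix, self_pred], dim=1)
    logits_student = model(ctx_student, SCALES[k + 1])

    # 3) Teacher-forcing pass on the clean, fully ground-truth context.
    ctx_teacher = torch.cat([prefix, gt_scales[k]], dim=1)
    logits_teacher = model(ctx_teacher, SCALES[k + 1])

    target = gt_scales[k + 1]
    loss_tf = F.cross_entropy(logits_teacher.flatten(0, 1), target.flatten())
    loss_sf = F.cross_entropy(logits_student.flatten(0, 1), target.flatten())
    # Stand-in for CSFL (assumed form): pull the student-context distribution
    # toward the detached teacher-context one to stabilize training.
    loss_kl = F.kl_div(logits_student.log_softmax(-1),
                       logits_teacher.softmax(-1).detach(),
                       reduction="batchmean")
    return loss_tf + loss_sf + 0.1 * loss_kl

# Usage: one step on random tokens (batch size 2), rollout at the coarsest scale.
model = ToyScaleAR()
gt = [torch.randint(0, VOCAB, (2, n)) for n in SCALES]
loss = sar_step(model, gt, k=0)
loss.backward()
```

The rollout runs under `torch.no_grad()`, so its cost is a single extra forward pass at one scale, which matches the abstract's claim of minimal computational overhead; how SAR actually selects scales and weighs the losses is not specified in the abstract.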

Top-level tags: model training, aigc, computer vision
Detailed tags: autoregressive generation, exposure bias, training dynamics, image generation, coarse-to-fine

Rethinking Training Dynamics in Scale-wise Autoregressive Generation


1️⃣ One-sentence summary

This paper proposes a method called Self-Autoregressive Refinement (SAR), which improves the training process to reduce the quality degradation caused by accumulated prediction errors during image generation, thereby efficiently boosting the generation quality of existing autoregressive models.


Source: arXiv:2512.06421