📄 Abstract - Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer

The landscape of high-performance image generation models is currently dominated by proprietary systems, such as Nano Banana Pro and Seedream 4.0. Leading open-source alternatives, including Qwen-Image, Hunyuan-Image-3.0, and FLUX.2, are characterized by massive parameter counts (20B to 80B), making them impractical for inference and fine-tuning on consumer-grade hardware. To address this gap, we propose Z-Image, an efficient 6B-parameter foundation generative model built upon a Scalable Single-Stream Diffusion Transformer (S3-DiT) architecture that challenges the "scale-at-all-costs" paradigm. By systematically optimizing the entire model lifecycle -- from a curated data infrastructure to a streamlined training curriculum -- we complete the full training workflow in just 314K H800 GPU hours (approx. $630K). Our few-step distillation scheme with reward post-training further yields Z-Image-Turbo, offering both sub-second inference latency on an enterprise-grade H800 GPU and compatibility with consumer-grade hardware (<16GB VRAM). Additionally, our omni-pre-training paradigm enables efficient training of Z-Image-Edit, an editing model with impressive instruction-following capabilities. Both qualitative and quantitative experiments demonstrate that our model achieves performance comparable to or surpassing that of leading competitors across various dimensions. Most notably, Z-Image exhibits exceptional capabilities in photorealistic image generation and bilingual text rendering, delivering results that rival top-tier commercial models, thereby demonstrating that state-of-the-art results are achievable with significantly reduced computational overhead. We publicly release our code, weights, and online demo to foster the development of accessible, budget-friendly, yet state-of-the-art generative models.
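The abstract does not spell out the S3-DiT layer design, but for readers unfamiliar with single-stream diffusion transformers, the sketch below illustrates the general pattern this family of models shares: text tokens and noisy image tokens are concatenated into one sequence and processed jointly by shared transformer blocks with timestep (AdaLN-style) modulation. All class names, shapes, and hyperparameters here are illustrative assumptions, not Z-Image's actual implementation.

```python
# Illustrative sketch only: a minimal "single-stream" diffusion transformer block,
# assuming the common DiT recipe (joint text+image token sequence, AdaLN-Zero-style
# timestep modulation). The real S3-DiT details are not given in the abstract.
import torch
import torch.nn as nn

class SingleStreamBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int, mlp_ratio: float = 4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )
        # Timestep conditioning produces per-block shift/scale/gate parameters.
        self.ada = nn.Linear(dim, 6 * dim)

    def forward(self, tokens: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # tokens: (B, L, D) = concatenated text and noisy-image tokens in one stream
        s1, c1, g1, s2, c2, g2 = self.ada(t_emb).chunk(6, dim=-1)
        h = self.norm1(tokens) * (1 + c1.unsqueeze(1)) + s1.unsqueeze(1)
        attn_out, _ = self.attn(h, h, h)  # full self-attention over the joint sequence
        tokens = tokens + g1.unsqueeze(1) * attn_out
        h = self.norm2(tokens) * (1 + c2.unsqueeze(1)) + s2.unsqueeze(1)
        return tokens + g2.unsqueeze(1) * self.mlp(h)

# Usage: one block over a joint sequence of 77 text tokens and 256 image patches.
block = SingleStreamBlock(dim=512, num_heads=8)
x = torch.randn(2, 77 + 256, 512)   # (batch, tokens, dim)
t = torch.randn(2, 512)             # timestep embedding
print(block(x, t).shape)            # torch.Size([2, 333, 512])
```

Compared with dual-stream designs that keep separate text and image branches, a single-stream layout like this shares all block parameters across modalities, which is one plausible source of the parameter efficiency the abstract emphasizes.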

Top tags: model training, computer vision, AIGC
Detailed tags: image generation, diffusion transformer, efficient training, model distillation, open-source model

Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer


1️⃣ One-Sentence Summary

This paper presents Z-Image, an efficient open-source image generation model. Through a novel single-stream diffusion Transformer architecture and end-to-end pipeline optimization, it matches the performance of top-tier commercial models with only 6B parameters, substantially lowering compute cost and the hardware barrier.


📄 Open the original PDF