arXiv submission date: 2026-05-14
📄 Abstract - RefDecoder: Enhancing Visual Generation with Conditional Video Decoding

Video generation powers a vast array of downstream applications. However, while the de facto standard, latent diffusion models, typically employs heavily conditioned denoising networks, the decoder often remains unconditional. We observe that this architectural asymmetry leads to significant loss of detail and inconsistency relative to the input image. To address this, we argue that the decoder requires equal conditioning to preserve structural integrity. We introduce RefDecoder, a reference-conditioned video VAE decoder that injects a high-fidelity reference image signal directly into the decoding process via reference attention. Specifically, a lightweight image encoder maps the reference frame into detail-rich high-dimensional tokens, which are co-processed with the denoised video latent tokens at each decoder up-sampling stage. We demonstrate consistent improvements across several distinct decoder backbones (e.g., Wan 2.1 and VideoVAE+), achieving up to +2.1 dB PSNR over the unconditional baselines on the Inter4K, WebVid, and Large Motion reconstruction benchmarks. Notably, RefDecoder can be directly swapped into existing video generation systems without additional fine-tuning, and we report across-the-board improvements in subject consistency, background consistency, and overall quality scores on the VBench I2V benchmark. Beyond I2V, RefDecoder generalizes well to a wide range of visual generation tasks such as style transfer and video editing refinement.
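The core mechanism described above — video latent tokens attending to reference-image tokens inside the decoder — can be sketched as a single-head cross-attention step with a residual connection. This is a minimal illustrative sketch, not the paper's actual implementation: the token counts, dimensions, and function names below are assumptions, and the real RefDecoder applies this at every up-sampling stage with learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reference_attention(video_tokens, ref_tokens):
    """Single-head cross-attention sketch: video latent tokens (queries)
    attend to reference-image tokens (keys/values), and the result is
    added residually, injecting reference detail into the video stream."""
    d = video_tokens.shape[-1]
    scores = video_tokens @ ref_tokens.T / np.sqrt(d)  # (Nv, Nr) similarities
    weights = softmax(scores, axis=-1)                 # attention over ref tokens
    return video_tokens + weights @ ref_tokens         # residual injection

# Toy usage: 16 video latent tokens and 64 reference tokens of dim 32.
rng = np.random.default_rng(0)
video = rng.standard_normal((16, 32))
ref = rng.standard_normal((64, 32))
fused = reference_attention(video, ref)
print(fused.shape)  # (16, 32)
```

The residual form means the block can only add reference information on top of the decoded latent, which is consistent with the claim that RefDecoder can be swapped into existing systems without retraining the rest of the pipeline.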

Top-level tags: computer vision video generation aigc
Detailed tags: video decoder conditional decoding reference-conditioned latent diffusion visual generation

RefDecoder: Enhancing Visual Generation with Conditional Video Decoding


1️⃣ One-sentence summary

This paper proposes RefDecoder, a method that injects reference-image information into the decoding stage of video generation models, markedly improving the detail sharpness of generated videos and their consistency with the input image, and it can be dropped into existing systems without additional training.

Source: arXiv 2605.15196