arXiv submission date: 2026-01-05
📄 Abstract - VINO: A Unified Visual Generator with Interleaved OmniModal Context

We present VINO, a unified visual generator that performs image and video generation and editing within a single framework. Instead of relying on task-specific models or independent modules for each modality, VINO uses a shared diffusion backbone that conditions on text, images and videos, enabling a broad range of visual creation and editing tasks under one model. Specifically, VINO couples a vision-language model (VLM) with a Multimodal Diffusion Transformer (MMDiT), where multimodal inputs are encoded as interleaved conditioning tokens, and then used to guide the diffusion process. This design supports multi-reference grounding, long-form instruction following, and coherent identity preservation across static and dynamic content, while avoiding modality-specific architectural components. To train such a unified system, we introduce a multi-stage training pipeline that progressively expands a video generation base model into a unified, multi-task generator capable of both image and video input and output. Across diverse generation and editing benchmarks, VINO demonstrates strong visual quality, faithful instruction following, improved reference and attribute preservation, and more controllable multi-identity edits. Our results highlight a practical path toward scalable unified visual generation, and the promise of interleaved, in-context computation as a foundation for general-purpose visual creation.
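The abstract describes encoding multimodal inputs as interleaved conditioning tokens that guide a shared diffusion backbone. The paper does not publish code here, so the following is only a minimal illustrative sketch of the general idea: each text, image, or video segment is tokenized and concatenated in order, with a parallel modality tag per token so a single transformer can attend over the whole context. The names `Segment` and `interleave`, and the placeholder embeddings, are hypothetical and not from the paper.

```python
# Hypothetical sketch of interleaved omni-modal conditioning (not the authors' code).
# Each input segment (text, image, or video) is mapped to per-token embeddings by
# some encoder (stubbed out here), then concatenated in the given order with
# modality tags, forming one conditioning sequence for a diffusion transformer.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Segment:
    modality: str                # "text", "image", or "video"
    tokens: List[List[float]]    # per-token embedding vectors (placeholders)

def interleave(segments: List[Segment]) -> Tuple[List[List[float]], List[str]]:
    """Concatenate segments into one conditioning sequence, keeping a parallel
    list of modality tags so modality-specific embeddings can be added later."""
    seq: List[List[float]] = []
    tags: List[str] = []
    for seg in segments:
        for tok in seg.tokens:
            seq.append(tok)
            tags.append(seg.modality)
    return seq, tags

# Toy usage: a text prompt, one reference image, and a short video clip.
text = Segment("text", [[0.1, 0.2]] * 3)
image = Segment("image", [[0.3, 0.4]] * 4)
video = Segment("video", [[0.5, 0.6]] * 2)
seq, tags = interleave([text, image, video])
print(len(seq))   # 9 conditioning tokens in interleaved order
```

In the actual model, the real encoders (e.g. a VLM for text and vision inputs) would replace the placeholder embeddings, and the diffusion transformer would cross-attend or jointly attend over this sequence.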

Top-level tags: multi-modal, model training, video generation
Detailed tags: unified visual generation, multimodal diffusion transformer, interleaved conditioning, image-video editing, multi-reference grounding

VINO: A Unified Visual Generator with Interleaved OmniModal Context


1️⃣ One-sentence summary

This paper presents VINO, a unified model that handles both image and video generation and editing within a single framework; through a shared diffusion backbone and interleaved multimodal input encoding, it achieves high-quality visual content creation across modalities.

Source: arXiv:2601.02358