A Simple Baseline for Unifying Understanding, Generation, and Editing via Vanilla Next-token Prediction
1️⃣ One-sentence summary
This paper proposes Wallaroo, a simple autoregressive model that, using only vanilla next-token prediction, handles multi-modal understanding, image generation, and image editing simultaneously, and in experiments matches or exceeds the performance of existing unified models.
In this work, we introduce Wallaroo, a simple autoregressive baseline that leverages vanilla next-token prediction to unify multi-modal understanding, image generation, and editing in a single model. Wallaroo further supports multi-resolution image input and output, and is bilingual in Chinese and English. We decouple visual encoding into separate pathways and apply a four-stage training strategy to reshape the model's capabilities. Experiments across various benchmarks show that Wallaroo matches or exceeds other unified models, suggesting the great potential of autoregressive models for unifying multi-modal understanding and generation. Our code is available at this https URL.
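The core idea in the abstract — one vanilla next-token prediction loop serving both understanding and generation — can be sketched as a toy. This is an illustrative assumption, not the paper's implementation: the vocabulary split and the `toy_logits` stand-in for the transformer are hypothetical.

```python
# Toy sketch: greedy next-token prediction over a unified vocabulary where
# text tokens and discrete image tokens share one ID space (assumption).
from typing import List

TEXT_VOCAB = 100           # hypothetical: IDs 0..99 are text tokens
IMAGE_VOCAB = 50           # hypothetical: IDs 100..149 are image tokens
VOCAB_SIZE = TEXT_VOCAB + IMAGE_VOCAB

def toy_logits(sequence: List[int]) -> List[float]:
    """Stand-in for the autoregressive transformer: score every ID."""
    # Deterministic toy rule: favor the successor of the last token.
    nxt = (sequence[-1] + 1) % VOCAB_SIZE
    return [1.0 if i == nxt else 0.0 for i in range(VOCAB_SIZE)]

def generate(prompt: List[int], n_new: int) -> List[int]:
    """One greedy decoding loop; emitting text IDs is 'understanding',
    emitting image IDs is 'generation' -- the mechanism is identical."""
    seq = list(prompt)
    for _ in range(n_new):
        logits = toy_logits(seq)
        seq.append(max(range(VOCAB_SIZE), key=logits.__getitem__))
    return seq

out = generate([98], 4)  # decoding crosses from text IDs into image IDs
```

The point of the sketch is that nothing in the decoding loop distinguishes modalities; only the token-ID ranges do, which is what lets a single objective cover understanding, generation, and editing.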
Source: arXiv:2603.04980