Kelix Technical Report
1️⃣ One-Sentence Summary
This paper introduces Kelix, a model that unifies multimodal understanding and generation through a new discrete visual encoding approach, addressing the weak understanding ability of discrete representations in earlier vision-language models.
Autoregressive large language models (LLMs) scale well by expressing diverse tasks as sequences of discrete natural-language tokens and training with next-token prediction, which unifies comprehension and generation under self-supervision. Extending this paradigm to multimodal data requires a shared, discrete representation across modalities. However, most vision-language models (VLMs) still rely on a hybrid interface: discrete text tokens paired with continuous Vision Transformer (ViT) features. Because supervision is largely text-driven, these models are often biased toward understanding and cannot fully leverage large-scale self-supervised learning on non-text data. Recent work has explored discrete visual tokenization to enable fully autoregressive multimodal modeling, showing promising progress toward unified understanding and generation. Yet existing discrete vision tokens frequently lose information due to limited code capacity, resulting in noticeably weaker understanding than continuous-feature VLMs. We present Kelix, a fully discrete autoregressive unified model that closes the understanding gap between discrete and continuous visual representations.
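To make the unified paradigm the abstract describes concrete, below is a minimal sketch of next-token prediction over a single shared vocabulary that holds both text tokens and discrete visual codes. Everything here (vocabulary sizes, model sizes, the `TinyUnifiedLM` class and the token-offset scheme) is a hypothetical illustration and is not taken from the Kelix paper.

```python
import torch
import torch.nn as nn

# Hypothetical sizes -- not from the paper.
TEXT_VOCAB = 32000       # ordinary text tokens
VISUAL_CODES = 8192      # discrete visual codebook entries
VOCAB = TEXT_VOCAB + VISUAL_CODES  # one shared vocabulary for both modalities


class TinyUnifiedLM(nn.Module):
    """Toy decoder-only LM over a mixed text + visual-token vocabulary."""

    def __init__(self, d_model=256, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        # Causal mask: each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.blocks(self.embed(tokens), mask=mask)
        return self.head(h)


def next_token_loss(model, tokens):
    """The same next-token objective applies to text and visual tokens alike."""
    logits = model(tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1)
    )


# Usage: a batch that concatenates text tokens with (offset) discrete visual codes.
model = TinyUnifiedLM()
text_part = torch.randint(0, TEXT_VOCAB, (2, 16))
visual_part = torch.randint(TEXT_VOCAB, VOCAB, (2, 16))  # visual codes live after text ids
loss = next_token_loss(model, torch.cat([text_part, visual_part], dim=1))
loss.backward()
```

The key design point the abstract emphasizes is that, once images are mapped to discrete codes, the model needs no separate continuous-feature pathway: a single cross-entropy objective over the shared vocabulary covers both understanding and generation.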
Source: arXiv: 2602.09843