arXiv submission date: 2026-04-13
📄 Abstract - MIXAR: Scaling Autoregressive Pixel-based Language Models to Multiple Languages and Scripts

Pixel-based language models are gaining momentum as alternatives to traditional token-based approaches, promising to circumvent tokenization challenges. However, the inherent perceptual diversity across languages poses a significant hurdle for multilingual generalization in pixel space. This paper introduces MIXAR, the first generative pixel-based language model trained on eight different languages spanning a range of scripts. We empirically evaluate MIXAR against previous pixel-based models as well as comparable tokenizer-based models, demonstrating substantial performance improvements on discriminative and generative multilingual tasks. Additionally, we show that MIXAR is robust to languages never seen during training. These results are further strengthened when scaling the model to 0.5B parameters, which improves not only its capabilities on generative tasks such as LAMBADA but also its robustness to input perturbations such as orthographic attacks.

Top-level tags: natural language processing, model training, multi-modal
Detailed tags: pixel-based language models, multilingual, script diversity, autoregressive models, orthographic robustness

MIXAR: Scaling Autoregressive Pixel-based Language Models to Multiple Languages and Scripts


1️⃣ One-sentence summary

This paper presents MIXAR, the first generative pixel-based language model trained on eight languages across different scripts. It significantly outperforms previous models on multilingual tasks, shows strong robustness to languages unseen during training, and its capabilities improve further as the model is scaled up.

Source: arXiv: 2604.11575