Jina-VLM: Small Multilingual Vision Language Model
1️⃣ One-sentence summary
This paper presents Jina-VLM, a small multilingual vision-language model that achieves leading multilingual performance on multiple visual question answering benchmarks while efficiently handling images of arbitrary resolution; the model's code and weights are open-sourced.
We present Jina-VLM, a 2.4B parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open 2B-scale VLMs. The model couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector that enables token-efficient processing of arbitrary-resolution images. The model achieves leading results on standard VQA benchmarks and multilingual evaluations while preserving competitive text-only performance. Model weights and code are publicly released at this https URL.
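The abstract's attention-pooling connector compresses the vision encoder's many patch tokens into a small, fixed number of tokens before they reach the language backbone. The sketch below is a toy illustration of that general idea only, not the actual Jina-VLM connector: the paper's details (number of queries, projection layers, normalization) are not given here, and the function and variable names are hypothetical. A small set of learned query vectors each attends over all patch embeddings, and the weighted averages become the pooled tokens.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(patch_tokens, queries):
    """Pool N patch embeddings down to len(queries) output tokens.

    patch_tokens: list of N d-dimensional vectors from the vision encoder.
    queries:      list of K learned d-dimensional query vectors (K << N).
    Returns K pooled d-dimensional tokens, one per query: each is the
    attention-weighted average of the patches, so the language model
    sees K tokens regardless of the input image's resolution.
    """
    d = len(queries[0])
    scale = 1.0 / math.sqrt(d)  # standard dot-product attention scaling
    pooled = []
    for q in queries:
        scores = [scale * sum(qi * pi for qi, pi in zip(q, p))
                  for p in patch_tokens]
        weights = softmax(scores)
        pooled.append([sum(w * p[j] for w, p in zip(weights, patch_tokens))
                       for j in range(d)])
    return pooled

# Example: 9 patch tokens of dimension 4 are pooled into 2 output tokens.
patches = [[float(i + j) for j in range(4)] for i in range(9)]
learned_queries = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
tokens = attention_pool(patches, learned_queries)
```

Because the number of outputs is fixed by the query count rather than by the image size, a higher-resolution image (more patches) still yields the same token budget downstream, which is the token-efficiency property the abstract refers to.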