arXiv submission date: 2026-03-30
📄 Abstract - JaWildText: A Benchmark for Vision-Language Models on Japanese Scene Text Understanding

Japanese scene text poses challenges that multilingual benchmarks often fail to capture, including mixed scripts, frequent vertical writing, and a character inventory far larger than the Latin alphabet. Although Japanese is included in several multilingual benchmarks, these resources do not adequately capture the language-specific complexities. Meanwhile, existing Japanese visual text datasets have primarily focused on scanned documents, leaving in-the-wild scene text underexplored. To fill this gap, we introduce JaWildText, a diagnostic benchmark for evaluating vision-language models (VLMs) on Japanese scene text understanding. JaWildText contains 3,241 instances from 2,961 images newly captured in Japan, with 1.12 million annotated characters spanning 3,643 unique character types. It comprises three complementary tasks that vary in visual organization, output format, and writing style: (i) Dense Scene Text Visual Question Answering (STVQA), which requires reasoning over multiple pieces of visual text evidence; (ii) Receipt Key Information Extraction (KIE), which tests layout-aware structured extraction from mobile-captured receipts; and (iii) Handwriting OCR, which evaluates page-level transcription across various media and writing directions. We evaluate 14 open-weight VLMs and find that the best model achieves an average score of 0.64 across the three tasks. Error analyses show recognition remains the dominant bottleneck, especially for kanji. JaWildText enables fine-grained, script-aware diagnosis of Japanese scene text capabilities, and will be released with evaluation code.
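The abstract does not state how transcription quality is scored; a common metric for page-level OCR evaluation is character error rate (CER), the Levenshtein edit distance between reference and hypothesis normalized by reference length. The sketch below is a hypothetical illustration, not the authors' evaluation code; it works on Japanese text because Python iterates strings by Unicode code point (kanji, kana, and Latin characters alike).

```python
def char_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over characters, normalized by reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] = edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / m if m else float(n > 0)

# One kanji substituted out of six characters:
print(char_error_rate("東京都渋谷区", "東京都渋合区"))  # ≈ 0.167
```

Because a single wrong kanji counts the same as a single wrong Latin letter, CER makes the kanji-dominated errors the paper reports directly visible in the score.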

Top-level tags: computer vision · natural language processing · benchmark
Detailed tags: scene text understanding · visual question answering · optical character recognition · multilingual evaluation · vision-language models

JaWildText: A Benchmark for Vision-Language Models on Japanese Scene Text Understanding


1️⃣ One-sentence summary

This paper introduces JaWildText, a new benchmark dedicated to understanding Japanese text in natural scenes. Through three complementary tasks, it comprehensively evaluates how well vision-language models handle Japanese-specific complexities such as mixed scripts, vertical writing, and a large character inventory, and finds that kanji recognition remains the dominant bottleneck for current models.

Source: arXiv 2603.27942