📄 Paper Summary
Visual-TableQA: Open-Domain Benchmark for Reasoning over Table Images
1️⃣ One-Sentence Summary
This paper introduces Visual-TableQA, a large-scale, open-domain dataset built via a low-cost, multi-model collaborative generation method, designed specifically to evaluate and improve the reasoning ability of vision-language models on complex table images.
Visual reasoning over structured data such as tables is a critical capability for modern vision-language models (VLMs), yet current benchmarks remain limited in scale, diversity, or reasoning depth, especially when it comes to rendered table images. Addressing this gap, we introduce Visual-TableQA, a large-scale, open-domain multimodal dataset specifically designed to evaluate and enhance visual reasoning over complex tabular data. Our generation pipeline is modular, scalable, and fully autonomous, involving multiple reasoning LLMs collaborating across distinct roles: generation, validation, and inspiration. Visual-TableQA comprises 2.5k richly structured LaTeX-rendered tables and 6k reasoning-intensive QA pairs, all produced at a cost of under USD 100. To promote diversity and creativity, our pipeline performs multi-model collaborative data generation via cross-model prompting ('inspiration') and LLM-jury filtering. Stronger models seed layouts and topics that weaker models elaborate, collectively distilling diverse reasoning patterns and visual structures into the dataset. Empirical results show that models fine-tuned on Visual-TableQA generalize robustly to external benchmarks, outperforming several proprietary models despite the dataset's synthetic nature. The full pipeline and resources are publicly available at this https URL.
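The abstract's pipeline (stronger models seed layouts and topics, weaker models elaborate them into QA pairs, and an LLM jury filters the results) can be sketched roughly as follows. This is a minimal illustrative outline, not the authors' implementation; every function name and the stubbed model calls are hypothetical placeholders for actual LLM API calls:

```python
# Hypothetical sketch of the multi-model collaborative generation loop:
# a "strong" model seeds a table layout for a topic ("inspiration"),
# a "weaker" model elaborates it into a QA pair, and a jury of models
# votes on whether the sample is kept ("LLM-jury filtering").

def strong_model_seed(topic):
    # Placeholder for a call to a stronger reasoning LLM that proposes
    # a LaTeX table layout/topic seed (cross-model prompting step).
    return f"% LaTeX table skeleton about {topic}"

def weak_model_elaborate(layout):
    # Placeholder for a weaker model that fills in the table and writes
    # a reasoning-intensive question-answer pair grounded in it.
    return {"table": layout, "question": "Q?", "answer": "A"}

def jury_accepts(sample, jurors):
    # LLM-jury filtering: keep the sample only if a majority of juror
    # models (stubbed here as callables returning bool) accept it.
    votes = [juror(sample) for juror in jurors]
    return sum(votes) > len(votes) / 2

def generate_dataset(topics, jurors):
    dataset = []
    for topic in topics:
        layout = strong_model_seed(topic)        # generation (seed)
        sample = weak_model_elaborate(layout)    # elaboration
        if jury_accepts(sample, jurors):         # validation
            dataset.append(sample)
    return dataset

# Toy run with three stub jurors that accept every sample:
data = generate_dataset(["economics", "chemistry"],
                        jurors=[lambda s: True] * 3)
```

In the real pipeline each of these roles would be backed by a different reasoning LLM, which is what distills diverse layouts and reasoning patterns into the dataset at low cost.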