arXiv submission date: 2026-02-26
📄 Abstract - SPARTA: Scalable and Principled Benchmark of Tree-Structured Multi-hop QA over Text and Tables

Real-world Table-Text question answering (QA) tasks require models that can reason across long text and source tables, traversing multiple hops and executing complex operations such as aggregation. Yet existing benchmarks are small, manually curated (and therefore error-prone), and contain shallow questions that seldom demand more than two hops or invoke aggregations, grouping, or other advanced analytical operations expressible in natural-language queries. We present SPARTA, an end-to-end construction framework that automatically generates large-scale Table-Text QA benchmarks with lightweight human validation, requiring only one quarter of the annotation time of HybridQA. The framework first constructs a reference fact database by enriching each source table with grounding tables whose tuples are atomic facts automatically extracted from the accompanying unstructured passages, then synthesizes nested queries whose number of nested predicates matches the desired hop count. To ensure that every SQL statement is executable and that its verbalization yields a fluent, human-sounding question, we propose two novel techniques: provenance-based refinement, which rewrites any syntactically valid query so that it returns a non-empty result, and realistic-structure enforcement, which confines generation to post-order traversals of the query graph. The resulting pipeline produces thousands of high-fidelity question-answer pairs covering aggregations, grouping, and deep multi-hop reasoning across text and tables. On SPARTA, state-of-the-art models that reach over 70 F1 on HybridQA or over 50 F1 on OTT-QA drop by more than 30 F1 points, exposing fundamental weaknesses in current cross-modal reasoning. Our benchmark, construction code, and baseline models are available at this https URL.
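The abstract's core generation idea, nested queries whose number of nested predicates equals the desired hop count, composed by a post-order traversal of the query tree, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual code: the table and column names (`stadiums`, `players`, `facts_birthplace`, etc.) are hypothetical, and the real framework operates over a much richer query graph.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryNode:
    """One predicate in the query tree; each nested child adds one hop."""
    table: str
    select_col: str
    filter_col: Optional[str] = None   # column compared against the child subquery
    child: Optional["QueryNode"] = None

def compose_sql(node: QueryNode) -> str:
    """Post-order composition: render the child subquery first, then wrap it."""
    if node.child is None:
        return f"SELECT {node.select_col} FROM {node.table}"
    inner = compose_sql(node.child)
    return (f"SELECT {node.select_col} FROM {node.table} "
            f"WHERE {node.filter_col} IN ({inner})")

def hop_count(node: QueryNode) -> int:
    """Number of nested predicates along the chain."""
    return 1 if node.child is None else 1 + hop_count(node.child)

# A hypothetical 3-hop chain: birthplace fact -> player -> stadium.
leaf = QueryNode("facts_birthplace", "city")
mid = QueryNode("players", "team", "home_city", leaf)
root = QueryNode("stadiums", "capacity", "team_name", mid)

print(hop_count(root))      # 3 hops
print(compose_sql(root))    # fully nested SQL, innermost subquery built first
```

In this sketch the post-order traversal guarantees that every subquery is fully formed before its parent references it, which mirrors the paper's realistic-structure enforcement: generation never emits a predicate whose operand does not yet exist.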

Top-level tags: natural language processing, benchmark data
Detailed tags: question answering, table-text reasoning, multi-hop reasoning, benchmark generation, sql-to-text

SPARTA: Scalable and Principled Benchmark of Tree-Structured Multi-hop QA over Text and Tables


1️⃣ One-sentence summary

This paper presents SPARTA, an automated framework that efficiently generates large-scale, high-quality multi-hop QA datasets spanning both text and tables, enabling more realistic evaluation of models on advanced operations such as aggregation, grouping, and deep multi-hop reasoning, and revealing significant weaknesses of current state-of-the-art models on such tasks.

Source: arXiv 2602.23286