TabReX: A Reference-Free, Explainable Framework for Evaluating Table Generation Quality / TabReX: Tabular Referenceless eXplainable Evaluation
1️⃣ One-sentence summary
This paper proposes TabReX, a reference-free evaluation framework that converts both source text and generated tables into knowledge graphs and aligns them, quantifying the structural and factual accuracy of tables generated by large language models, and introduces a large-scale benchmark to validate its advantages.
Evaluating the quality of tables generated by large language models (LLMs) remains an open challenge: existing metrics either flatten tables into text, ignoring structure, or rely on fixed references that limit generalization. We present TabReX, a reference-less, property-driven framework for evaluating tabular generation via graph-based reasoning. TabReX converts both source text and generated tables into canonical knowledge graphs, aligns them through an LLM-guided matching process, and computes interpretable, rubric-aware scores that quantify structural and factual fidelity. The resulting metric provides controllable trade-offs between sensitivity and specificity, yielding human-aligned judgments and cell-level error traces. To systematically assess metric robustness, we introduce TabReX-Bench, a large-scale benchmark spanning six domains and twelve planner-driven perturbation types across three difficulty tiers. Empirical results show that TabReX achieves the highest correlation with expert rankings, remains stable under harder perturbations, and enables fine-grained model-vs-prompt analysis, establishing a new paradigm for trustworthy, explainable evaluation of structured generation systems.
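The pipeline the abstract describes (table → triples, alignment against source facts, fidelity scores plus cell-level error traces) can be sketched roughly as follows. This is a toy illustration, not the paper's method: the LLM-guided matching is replaced by exact triple matching, and all function and column names below are hypothetical.

```python
# Toy sketch of a TabReX-style referenceless score. Assumptions:
# exact triple matching stands in for the paper's LLM-guided
# alignment; names are illustrative, not from the paper.

def to_triples(rows, entity_col):
    """Flatten table rows into (entity, column, value) triples,
    a stand-in for the paper's canonical knowledge graph."""
    triples = set()
    for row in rows:
        ent = row[entity_col]
        for col, val in row.items():
            if col != entity_col:
                triples.add((ent, col, val))
    return triples

def fidelity_scores(source_triples, table_triples):
    """Precision ~ factual fidelity of generated cells;
    recall ~ coverage of facts stated in the source text."""
    matched = source_triples & table_triples
    precision = len(matched) / len(table_triples) if table_triples else 0.0
    recall = len(matched) / len(source_triples) if source_triples else 0.0
    errors = sorted(table_triples - source_triples)  # cell-level error trace
    return precision, recall, errors

# Facts extracted from the source text vs. a generated table
# with one hallucinated cell ("city" = "Rome" instead of "Paris").
source = to_triples([{"name": "Li", "age": "30", "city": "Paris"}], "name")
table = to_triples([{"name": "Li", "age": "30", "city": "Rome"}], "name")
p, r, errs = fidelity_scores(source, table)
# p = 0.5, r = 0.5, errs = [("Li", "city", "Rome")]
```

The error list localizes the mismatch to a specific cell, mirroring the abstract's claim that the metric yields interpretable, cell-level error traces rather than a single opaque score.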
Source: arXiv:2512.15907