📄 Abstract - Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking
Large Language Models (LLMs) are increasingly deployed in real-world fact-checking systems, yet existing evaluations focus predominantly on claim verification and overlook the broader fact-checking workflow, including claim extraction and evidence retrieval. This narrow focus prevents current benchmarks from revealing systematic reasoning failures, factual blind spots, and robustness limitations of modern LLMs. To bridge this gap, we present FactArena, a fully automated arena-style evaluation framework that conducts comprehensive, stage-wise benchmarking of LLMs across the complete fact-checking pipeline. FactArena integrates three key components: (i) an LLM-driven fact-checking process that standardizes claim decomposition, evidence retrieval via tool-augmented interactions, and justification-based verdict prediction; (ii) an arena-style judgment mechanism guided by consolidated reference guidelines to ensure unbiased and consistent pairwise comparisons across heterogeneous judge agents; and (iii) an arena-driven claim-evolution module that adaptively generates more challenging and semantically controlled claims to probe LLMs' factual robustness beyond fixed seed data. Across 16 state-of-the-art LLMs spanning seven model families, FactArena produces stable and interpretable rankings. Our analyses further reveal significant discrepancies between static claim-verification accuracy and end-to-end fact-checking competence, highlighting the necessity of holistic evaluation. The proposed framework offers a scalable and trustworthy paradigm for diagnosing LLMs' factual reasoning, guiding future model development, and advancing the reliable deployment of LLMs in safety-critical fact-checking applications.
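To make the stage-wise design concrete, here is a minimal Python sketch of the pipeline the abstract describes: claim decomposition, tool-augmented evidence retrieval, justification-based verdict prediction, and a pairwise arena judgment. This is not the authors' implementation; the `LLM` callable type, the `retrieve` function, the prompt wording, and the verdict labels are all illustrative assumptions. The closing `elo_update` shows one common way to aggregate pairwise outcomes into rankings (whether FactArena uses an Elo-style update specifically is also an assumption).

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Assumed interface for a model under test: maps a prompt string to a text response.
LLM = Callable[[str], str]

@dataclass
class FactCheckRecord:
    """Intermediate outputs of one stage-wise fact-checking run."""
    claim: str
    sub_claims: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)
    justification: str = ""
    verdict: str = ""  # e.g. "supported" / "refuted" / "nei" (labels assumed)

def run_pipeline(model: LLM, retrieve: Callable[[str], List[str]], claim: str) -> FactCheckRecord:
    """Run one model through the full pipeline: decompose -> retrieve -> verdict."""
    record = FactCheckRecord(claim=claim)
    # Stage 1: decompose the input claim into atomic, checkable sub-claims.
    raw = model(f"Decompose into atomic sub-claims, one per line:\n{claim}")
    record.sub_claims = [s.strip() for s in raw.splitlines() if s.strip()]
    # Stage 2: tool-augmented evidence retrieval, one query per sub-claim.
    for sub in record.sub_claims:
        record.evidence.extend(retrieve(sub))
    # Stage 3: justification-based verdict over the gathered evidence.
    evidence_block = "\n".join(record.evidence)
    record.justification = model(
        f"Claim: {claim}\nEvidence:\n{evidence_block}\n"
        "Explain step by step whether the evidence supports the claim."
    )
    record.verdict = model(
        f"{record.justification}\nAnswer with one word: supported, refuted, or nei."
    ).strip().lower()
    return record

def judge_pair(judge: LLM, guidelines: str, a: FactCheckRecord, b: FactCheckRecord) -> str:
    """Arena-style pairwise comparison of two full runs under shared reference guidelines."""
    answer = judge(
        f"Guidelines:\n{guidelines}\n\n"
        f"Run A: {a.justification} -> {a.verdict}\n"
        f"Run B: {b.justification} -> {b.verdict}\n"
        "Which run is the better fact-check? Answer A or B."
    )
    # Naive answer parsing, sufficient for a sketch.
    return "A" if answer.strip().upper().startswith("A") else "B"

def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> Tuple[float, float]:
    """One Elo-style rating update: a common choice for turning pairwise wins into rankings."""
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    return r_winner + k * (1.0 - expected), r_loser - k * (1.0 - expected)
```

In this framing, a benchmark run plays many `run_pipeline` outputs against each other via `judge_pair` and folds the results through `elo_update`, which is why failures at any stage (decomposition, retrieval, or verdict) show up in the final ranking rather than only verification accuracy.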
Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking
1️⃣ One-Sentence Summary
This paper proposes FactArena, a fully automated evaluation framework that tests the true capabilities of large language models by simulating the complete fact-checking pipeline (claim extraction, evidence retrieval, and final verdict). It finds that evaluating only the final verification stage masks systematic model deficiencies, thereby offering a new evaluation paradigm for building more reliable fact-checking AI.