Benchmark Illusion: Disagreement among LLMs and Its Scientific Consequences
1️⃣ One-Sentence Summary
This paper exposes a "benchmark illusion": large language models that score similarly on mainstream benchmarks in fact harbor hidden, substantial disagreements on a large fraction of questions, and when these models are used to annotate data for scientific research, the choice of model becomes a hidden variable that can seriously affect the reproducibility of research findings.
Benchmarks underpin how progress in large language models (LLMs) is measured and trusted. Yet our analyses reveal that apparent convergence in benchmark accuracy can conceal deep epistemic divergence. Using two major reasoning benchmarks, MMLU-Pro and GPQA, we show that LLMs achieving comparable accuracy still disagree on 16-66% of items, and on 16-38% even among top-performing frontier models. These discrepancies suggest that different LLMs have distinct error profiles. When such models are used for scientific data annotation and inference, their hidden disagreements propagate into research results: in re-analyses of published studies in education and political science, switching the annotation model can change estimated treatment effects by more than 80% and, in some cases, reverse their sign. Together, these findings illustrate a benchmark illusion, where equal accuracy may conceal disagreement, with model choice becoming a hidden yet consequential variable for scientific reproducibility.
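To make the "equal accuracy, hidden disagreement" point concrete, here is a minimal simulation sketch (not from the paper; the benchmark size, the 70% accuracy level, and the independent-errors assumption are all hypothetical) showing how two models with identical scores on 4-option multiple-choice items can still disagree on roughly half of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # hypothetical benchmark size
truth = rng.integers(0, 4, size=n)         # gold labels for 4-option MCQ items

def simulate_model(acc: float) -> np.ndarray:
    """Answers matching `truth` with probability `acc`; wrong answers are
    drawn uniformly from the three non-gold options, independently."""
    is_correct = rng.random(n) < acc
    wrong = (truth + rng.integers(1, 4, size=n)) % 4   # guaranteed != truth
    return np.where(is_correct, truth, wrong)

model_a = simulate_model(0.70)
model_b = simulate_model(0.70)

print(f"accuracy A: {(model_a == truth).mean():.2f}")
print(f"accuracy B: {(model_b == truth).mean():.2f}")
print(f"item-level disagreement: {(model_a != model_b).mean():.2f}")
# With independent errors at 70% accuracy, expected disagreement is
# 2*0.7*0.3 + 0.3*0.3*(2/3) ≈ 0.48, despite identical benchmark scores.
```

A simple counting bound makes the same point without simulation: two models that each score accuracy p on the same benchmark must both be correct on at least a 2p - 1 fraction of items, so they can disagree on at most 2(1 - p) of them, e.g., up to 60% when p = 0.7. Correlated error profiles push disagreement below that ceiling, which is consistent with the 16-66% range reported above.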
Source: arXiv: 2602.11898