A Judge-Aware Ranking Framework for Evaluating Large Language Models without Ground Truth
1️⃣ One-Sentence Summary
This paper proposes a new evaluation method that ranks large language models more accurately by accounting for differences in the reliability of the AI "judges", without requiring reference answers, yielding evaluation results that are both more trustworthy and more data-efficient.
Evaluating large language models (LLMs) on open-ended tasks without ground-truth labels is increasingly done via the LLM-as-a-judge paradigm. A critical but under-modeled issue is that judge LLMs differ substantially in reliability; treating all judges equally can yield biased leaderboards and misleading uncertainty estimates. Worse, under a misspecified aggregation scheme, collecting more data can make the evaluation more confidently wrong. We propose a judge-aware ranking framework that extends the Bradley-Terry-Luce model with judge-specific discrimination parameters, jointly estimating latent model quality and judge reliability from pairwise comparisons without reference labels. We establish identifiability up to natural normalizations and prove consistency and asymptotic normality of the maximum likelihood estimator, enabling confidence intervals for score differences and rank comparisons. Across multiple public benchmarks and a newly collected dataset, our method improves agreement with human preferences, achieves higher data efficiency than unweighted baselines, and produces calibrated uncertainty quantification for LLM rankings.
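To make the idea concrete, below is a minimal sketch (not the authors' code) of how such a judge-aware Bradley-Terry-Luce likelihood could be fit by maximum likelihood. It assumes judge reliability enters as a per-judge discrimination parameter a_k that scales the quality gap, i.e. P(model i beats model j | judge k) = sigmoid(a_k (theta_i − theta_j)); the function names, the log-parameterization of a_k, and the centering used for identifiability are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

# Hypothetical sketch: judge-aware BTL with per-judge discrimination a_k.
# comparisons: list of (i, j, k, y), where y = 1 if judge k preferred model i over j.

def neg_log_likelihood(params, comparisons, n_models, n_judges):
    theta = params[:n_models]                      # latent model qualities
    log_a = params[n_models:n_models + n_judges]   # log discrimination per judge
    a = np.exp(log_a)                              # keep discriminations positive
    nll = 0.0
    for i, j, k, y in comparisons:
        p = expit(a[k] * (theta[i] - theta[j]))    # P(i beats j under judge k)
        p = np.clip(p, 1e-12, 1 - 1e-12)
        nll -= y * np.log(p) + (1 - y) * np.log(1 - p)
    return nll

def fit(comparisons, n_models, n_judges):
    x0 = np.zeros(n_models + n_judges)             # theta = 0, a = 1 at start
    res = minimize(neg_log_likelihood, x0,
                   args=(comparisons, n_models, n_judges), method="L-BFGS-B")
    theta_hat = res.x[:n_models]
    a_hat = np.exp(res.x[n_models:])
    # Center the qualities to resolve the location indeterminacy of BTL scores.
    return theta_hat - theta_hat.mean(), a_hat
```

A natural reading of the fitted parameters: theta_hat orders the evaluated models, while a_hat close to zero flags a judge whose preferences carry little signal and are automatically down-weighted in the joint estimate.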
Source: arXiv: 2601.21817