LLM-as-Judge on a Budget
1️⃣ One-Sentence Summary
This paper proposes a principled method for evaluating large language model performance more accurately under a limited compute budget by dynamically allocating the number of judge queries: resources are directed preferentially to the evaluation items with the highest uncertainty, which significantly reduces the overall estimation error.
LLM-as-a-judge has emerged as a cornerstone technique for evaluating large language models by leveraging LLM reasoning to score prompt-response pairs. Since LLM judgments are stochastic, practitioners commonly query each pair multiple times to estimate mean scores accurately. This raises a critical challenge: given a fixed computational budget $B$, how should queries be allocated across $K$ prompt-response pairs to minimize estimation error?

We present a principled variance-adaptive approach leveraging multi-armed bandit theory and concentration inequalities. Our method dynamically allocates queries based on estimated score variances, concentrating resources where uncertainty is highest. Further, our algorithm is shown to achieve a worst-case score-estimation error of $\tilde{O}\left(\sqrt{\frac{\sum_{i=1}^K \sigma_i^2}{B}}\right)$, where $\sigma_i^2$ is the unknown score variance of pair $i \in [K]$, matching a near-optimal budget allocation.

Experiments on \emph{Summarize-From-Feedback} and \emph{HelpSteer2} demonstrate that our method significantly outperforms uniform allocation, reducing worst-case estimation error under identical budgets. Our work establishes a theoretical foundation for efficient LLM evaluation, with practical implications for AI safety, model alignment, and automated assessment at scale.
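To make the variance-adaptive idea concrete, here is a minimal sketch (not the paper's actual algorithm) of greedy budget allocation: after a uniform warm-up, each remaining query goes to the pair whose mean estimate is currently least certain, i.e. the one with the largest $\hat{\sigma}_i^2 / n_i$. The function `sample_fn`, the `warmup` parameter, and the greedy rule are all illustrative assumptions.

```python
def adaptive_allocate(sample_fn, K, B, warmup=3):
    """Variance-adaptive query allocation (simplified illustrative sketch).

    sample_fn(i) returns one stochastic judge score for pair i.
    Spends `warmup` queries per pair, then greedily assigns each
    remaining query to the pair with the largest estimated variance
    of its sample mean, var_i / n_i. Returns (means, counts).
    """
    scores = [[] for _ in range(K)]

    # Warm-up: uniform exploration to seed the variance estimates.
    for i in range(K):
        for _ in range(warmup):
            scores[i].append(sample_fn(i))
    spent = K * warmup

    def mean_variance(i):
        # Unbiased sample variance of pair i, divided by its query count:
        # the estimated variance of the running mean estimate.
        xs = scores[i]
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return v / len(xs)

    # Greedy phase: each query goes to the currently least-certain pair.
    while spent < B:
        i = max(range(K), key=mean_variance)
        scores[i].append(sample_fn(i))
        spent += 1

    means = [sum(xs) / len(xs) for xs in scores]
    counts = [len(xs) for xs in scores]
    return means, counts
```

With a deterministic pair (zero variance) and a noisy pair, all post-warm-up budget flows to the noisy pair, which is exactly the behavior the abstract describes: resources concentrate where uncertainty is highest.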
Source: arXiv: 2602.15481