Quantifying construct validity in large language model evaluations
1️⃣ One-sentence summary
This paper proposes a new "structured capabilities model" that combines the strengths of latent factor models and scaling laws to extract interpretable and generalisable capabilities from a large collection of benchmark results, giving a more accurate picture of what large language models can actually do instead of relying solely on flawed benchmark scores.
The LLM community often reports benchmark results as if they are synonymous with general model capabilities. However, benchmarks can have problems that distort performance, like test set contamination and annotator error. How can we know that a benchmark is a reliable indicator of some capability that we want to measure? This question concerns the construct validity of LLM benchmarks, and it requires separating benchmark results from capabilities when we model and predict LLM performance. Both social scientists and computer scientists propose formal models - latent factor models and scaling laws - for identifying the capabilities underlying benchmark scores. However, neither technique is satisfactory for construct validity. Latent factor models ignore scaling laws, and as a result, the capabilities they extract often proxy model size. Scaling laws ignore measurement error, and as a result, the capabilities they extract are both uninterpretable and overfit to the observed benchmarks. This thesis presents the structured capabilities model, the first model to extract interpretable and generalisable capabilities from a large collection of LLM benchmark results. I fit this model and its two alternatives on a large sample of results from the OpenLLM Leaderboard. Structured capabilities outperform latent factor models on parsimonious fit indices, and exhibit better out-of-distribution benchmark prediction than scaling laws. These improvements are possible because neither existing approach separates model scale from capabilities in the appropriate way. Model scale should inform capabilities, as in scaling laws, and these capabilities should inform observed results up to measurement error, as in latent factor models. In combining these two insights, structured capabilities demonstrate better explanatory and predictive power for quantifying construct validity in LLM evaluations.
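To make the contrast concrete, here is a minimal notational sketch of how the three model families might relate. The symbols are illustrative assumptions, not the paper's actual specification: $y_{mb}$ denotes model $m$'s score on benchmark $b$, $\theta_m$ a vector of latent capabilities, $\lambda_b$ benchmark loadings, and $N_m$ a measure of model scale.

```latex
% Illustrative notation only; the abstract does not give the exact functional forms.

% Latent factor model: capabilities explain scores up to measurement error,
% but nothing ties the capabilities to model scale.
y_{mb} = \lambda_b^{\top} \theta_m + \epsilon_{mb}

% Scaling law: scale predicts each benchmark directly,
% with no shared capabilities and no measurement-error term.
y_{mb} = f_b(N_m)

% Structured capabilities (sketch): scale informs the capabilities,
% and the capabilities inform observed scores up to measurement error.
\theta_m = g(\log N_m) + u_m, \qquad y_{mb} = \lambda_b^{\top} \theta_m + \epsilon_{mb}
```

On this reading, the structured capabilities model places scale one level upstream: it shapes the latent capabilities rather than the benchmark scores directly, which is what allows the extracted capabilities to remain interpretable while still generalising to unseen benchmarks.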
Source: arXiv: 2602.15532