arXiv submission date: 2026-01-29
📄 Abstract - Nonparametric LLM Evaluation from Preference Data

Evaluating the performance of large language models (LLMs) from human preference data is crucial for obtaining LLM leaderboards. However, many existing approaches either rely on restrictive parametric assumptions or lack valid uncertainty quantification when flexible machine learning methods are used. In this paper, we propose a nonparametric statistical framework, DMLEval, for comparing and ranking LLMs from preference data using debiased machine learning (DML). For this, we introduce generalized average ranking scores (GARS), which generalize commonly used ranking models, including the Bradley-Terry model and PageRank/rank centrality, to complex human responses such as ties. DMLEval comes with the following advantages: (i) It produces statistically efficient estimates of GARS ranking scores. (ii) It naturally allows the incorporation of black-box machine learning methods for estimation. (iii) It can be combined with pre-trained LLM evaluators (e.g., using LLM-as-a-judge). (iv) It suggests optimal policies for collecting preference data under budget constraints. We demonstrate these advantages both theoretically and empirically on synthetic and real-world preference datasets. In summary, our framework provides practitioners with powerful, state-of-the-art methods for comparing or ranking LLMs.
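To make the kind of ranking scores mentioned above concrete, here is a minimal, illustrative Bradley-Terry fit from pairwise win counts using the classic MM update. This is not the paper's DMLEval/GARS estimator; the function name and the toy win matrix are hypothetical, and the sketch only shows the baseline parametric model that GARS is said to generalize.

```python
import numpy as np

def bradley_terry(wins: np.ndarray, n_iter: int = 500, tol: float = 1e-10) -> np.ndarray:
    """MM updates for Bradley-Terry scores; wins[i, j] = times model i was preferred over model j."""
    m = wins.shape[0]
    n_pair = wins + wins.T              # comparisons per pair (diagonal stays 0)
    total_wins = wins.sum(axis=1)
    s = np.ones(m)                      # initial scores
    for _ in range(n_iter):
        # denom_i = sum_j n_ij / (s_i + s_j); diagonal terms contribute 0 since n_ii = 0
        denom = (n_pair / (s[:, None] + s[None, :])).sum(axis=1)
        s_new = total_wins / denom
        s_new /= s_new.sum()            # fix the scale (scores are identified only up to a constant)
        if np.max(np.abs(s_new - s)) < tol:
            return s_new
        s = s_new
    return s

# Hypothetical win counts among three LLMs (row beats column).
wins = np.array([[0, 8, 9],
                 [4, 0, 7],
                 [3, 5, 0]])
scores = bradley_terry(wins)
print(np.argsort(-scores))              # ranking from best to worst
```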

Top-level tags: llm, model evaluation, machine learning
Detailed tags: nonparametric evaluation, preference data, ranking models, debiased machine learning, uncertainty quantification

Nonparametric LLM Evaluation from Preference Data


1️⃣ One-Sentence Summary

This paper proposes DMLEval, a nonparametric statistical framework that uses debiased machine learning to evaluate and rank large language models from human preference data more flexibly and reliably, while also supporting pre-trained models as judges and offering optimized recommendations for data collection.
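As a rough illustration of the debiased-ML idea in the summary, the sketch below computes a doubly-robust (AIPW-style) estimate of a win probability when only a subset of comparisons receive human labels and a pre-trained judge predicts the rest. This is a generic textbook construction, not DMLEval itself; the simulated data, labeling probability, and variable names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
judge_pred = rng.uniform(0.2, 0.8, n)                       # judge's P(model A wins) per prompt
pi = 0.3                                                    # known human-labeling probability
labeled = rng.random(n) < pi                                # which prompts got a human label
human = (rng.random(n) < judge_pred + 0.05).astype(float)   # human outcome (used only where labeled)

# Plug-in estimate uses the judge alone; the AIPW correction debiases it with
# inverse-probability-weighted residuals on the human-labeled subset.
plug_in = judge_pred.mean()
influence = judge_pred + labeled * (human - judge_pred) / pi
aipw = influence.mean()
se = influence.std(ddof=1) / np.sqrt(n)                     # plug-in standard error for a 95% CI

print(f"plug-in={plug_in:.3f}  AIPW={aipw:.3f} ± {1.96 * se:.3f}")
```

With a pre-trained judge no cross-fitting is needed; when the outcome model is itself fit on the evaluation data, a cross-fitted version of the same correction would typically be used.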

Source: arXiv 2601.21816