arXiv submission date: 2026-04-23
📄 Abstract - Unbiased Prevalence Estimation with Multicalibrated LLMs

Estimating the prevalence of a category in a population using imperfect measurement devices (diagnostic tests, classifiers, or large language models) is fundamental to science, public health, and online trust and safety. Standard approaches correct for known device error rates but assume these rates remain stable across populations. We show this assumption fails under covariate shift, and that multicalibration, which enforces calibration conditional on the input features rather than just on average, is sufficient for unbiased prevalence estimation under such shift. Standard calibration and quantification methods fail to provide this guarantee. Our results connect recent theoretical work on fairness to a longstanding measurement problem spanning nearly all academic disciplines. A simulation confirms that standard methods exhibit bias growing with shift magnitude, while a multicalibrated estimator maintains near-zero bias. While we focus the discussion mostly on LLMs, our theoretical results apply to any classification model. Two empirical applications -- estimating employment prevalence across U.S. states using the American Community Survey, and classifying political texts across four countries using an LLM -- demonstrate that multicalibration substantially reduces bias in practice, while highlighting that calibration data should cover the key feature dimensions along which target populations may differ.
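The failure mode the abstract describes can be made concrete in a few lines. The sketch below is a toy simulation (all population parameters are hypothetical, chosen only for illustration) with a single binary covariate x whose distribution shifts between source and target. The "standard" baseline is the classic Rogan-Gladen correction, which plugs source-estimated error rates into (observed rate - FPR) / (TPR - FPR); the "multicalibrated" estimator instead learns P(y = 1 | x, prediction) on the source, within each covariate cell, and averages those calibrated scores over the target:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, p_x1):
    """Toy population: binary covariate x, label rate and classifier
    error rates both depend on x (hypothetical parameters)."""
    x = rng.random(n) < p_x1
    y = rng.random(n) < np.where(x, 0.8, 0.2)       # P(y=1 | x)
    tpr = np.where(x, 0.95, 0.70)                    # error rates vary with x,
    fpr = np.where(x, 0.10, 0.30)                    # so average rates shift too
    yhat = np.where(y, rng.random(n) < tpr, rng.random(n) < fpr)
    return x, y, yhat

# Source is mostly x=0; target is mostly x=1 (covariate shift)
xs, ys, yhs = simulate(200_000, 0.2)
xt, yt, yht = simulate(200_000, 0.8)

# Standard correction (Rogan-Gladen) with error rates estimated on the source:
# biased on the target, because the marginal TPR/FPR no longer apply there.
tpr_s = yhs[ys].mean()
fpr_s = yhs[~ys].mean()
rg = (yht.mean() - fpr_s) / (tpr_s - fpr_s)

# Scores calibrated within each covariate cell on the source. Because
# P(y=1 | x, yhat) is invariant under covariate shift, averaging these
# scores over the target gives an unbiased prevalence estimate.
lookup = np.array([[ys[(xs == xv) & (yhs == hv)].mean() for hv in (0, 1)]
                   for xv in (0, 1)])
mc = lookup[xt.astype(int), yht.astype(int)].mean()

true_prev = yt.mean()
print(f"true {true_prev:.3f}  rogan-gladen {rg:.3f}  multicalibrated {mc:.3f}")
```

With these parameters the Rogan-Gladen estimate overshoots the true target prevalence noticeably, while the per-cell calibrated average lands near it. Conditioning on a single binary covariate stands in for full multicalibration, which enforces the same invariance over a richer collection of feature-defined subgroups.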

Top tags: llm machine learning model evaluation
Detailed tags: prevalence estimation multicalibration bias correction covariate shift classification

Unbiased Prevalence Estimation with Multicalibrated LLMs


1️⃣ One-sentence summary

This paper proposes that multicalibration, which requires the model to remain calibrated across different input features rather than merely on average, can largely eliminate the systematic bias that arises when large language models or classifiers are used to estimate category prevalence under shifts in a population's feature distribution (e.g., across regions or settings), and it validates the approach through simulation and two real-world case studies.

Source: arXiv: 2604.21549