基于专家混合的预测驱动推理 / Prediction-powered Inference by Mixture of Experts
1️⃣ One-sentence summary
This paper proposes a new method that combines multiple existing prediction models, treated as a panel of experts working together. In settings with only a small amount of labeled data but abundant unlabeled data, it enables more efficient and reliable statistical inference (e.g., estimating means or constructing confidence intervals) by automatically selecting or combining the expert models, yielding results more accurate than those of any single model used alone.
The rapidly expanding artificial intelligence (AI) industry has produced diverse yet powerful prediction tools, each with its own network architecture, training strategy, data-processing pipeline, and domain-specific strengths. These tools create new opportunities for semi-supervised inference, in which labeled data are limited and expensive to obtain, whereas unlabeled data are abundant and widely available. Given a collection of predictors, we treat them as a mixture of experts (MOE) and introduce an MOE-powered semi-supervised inference framework built upon prediction-powered inference (PPI). Motivated by the variance reduction principle underlying PPI, the proposed framework seeks the mixture of experts that achieves the smallest possible variance. Compared with standard PPI, the MOE-powered inference framework adapts to the unknown performance of individual predictors, benefits from their collective predictive power, and enjoys a best-expert guarantee. The framework is flexible and applies to mean estimation, linear regression, quantile estimation, and general M-estimation. We develop non-asymptotic theory for the MOE-powered inference framework and establish upper bounds on the coverage error of the resulting confidence intervals. Numerical experiments demonstrate the practical effectiveness of MOE-powered inference and corroborate our theoretical findings.
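To make the variance-reduction idea concrete, below is a minimal Python sketch (not the authors' code) of MOE-powered PPI for mean estimation. It assumes the mixture is an affine, sum-to-one combination of expert predictions and picks the weights that minimize the plug-in variance of the prediction-powered estimator; the function name `moe_ppi_mean` and all implementation details are illustrative, and may differ from the paper's exact formulation.

```python
# Minimal sketch of MOE-powered prediction-powered inference (PPI) for
# mean estimation. Assumption: the mixture is a sum-to-one combination
# of K >= 2 expert predictors, with weights chosen to minimize the
# estimated variance of the PPI point estimate.
import numpy as np
from scipy.stats import norm

def moe_ppi_mean(Y, preds_lab, preds_unlab, alpha=0.05):
    """PPI mean estimate using a variance-minimizing mixture of experts.

    Y           : (n,)   labels on the labeled sample
    preds_lab   : (n, K) expert predictions on the labeled sample
    preds_unlab : (N, K) expert predictions on the unlabeled sample
    Returns the point estimate and a (1 - alpha) normal confidence interval.
    """
    n, K = preds_lab.shape
    N = preds_unlab.shape[0]

    # Rectifier residuals Y - f_k(X) per expert. For sum-to-one weights w,
    # Y - f_w(X) = sum_k w_k (Y - f_k(X)), so the PPI estimator's variance
    # is w' [Cov(residuals)/n + Cov(unlabeled predictions)/N] w.
    resid = Y[:, None] - preds_lab
    Sigma = (np.cov(resid, rowvar=False) / n
             + np.cov(preds_unlab, rowvar=False) / N)

    # Minimum-variance weights subject to sum(w) = 1: w is proportional to
    # Sigma^{-1} 1 (a small ridge guards against near-singular Sigma).
    ones = np.ones(K)
    w = np.linalg.solve(Sigma + 1e-10 * np.eye(K), ones)
    w /= w.sum()

    # PPI point estimate: unlabeled mean of the mixture plus the mean
    # rectifier, which corrects any bias of the mixture predictor.
    mix_unlab = preds_unlab @ w
    rect = resid @ w
    est = mix_unlab.mean() + rect.mean()

    # Plug-in standard error and normal confidence interval.
    se = np.sqrt(mix_unlab.var(ddof=1) / N + rect.var(ddof=1) / n)
    z = norm.ppf(1 - alpha / 2)
    return est, (est - z * se, est + z * se)

if __name__ == "__main__":
    # Synthetic check: three experts with different biases and noise levels;
    # the true mean of Y is 2.0.
    rng = np.random.default_rng(0)
    n, N, K = 200, 10_000, 3
    X_lab, X_unlab = rng.normal(size=n), rng.normal(size=N)
    truth = lambda x: 2.0 + x
    Y = truth(X_lab) + rng.normal(scale=0.5, size=n)
    biases = np.array([0.0, 0.3, -0.5])
    noises = np.array([0.1, 0.05, 0.8])
    preds_lab = truth(X_lab)[:, None] + biases + rng.normal(size=(n, K)) * noises
    preds_unlab = truth(X_unlab)[:, None] + biases + rng.normal(size=(N, K)) * noises
    est, (lo, hi) = moe_ppi_mean(Y, preds_lab, preds_unlab)
    print(f"estimate = {est:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note that each single expert corresponds to a feasible weight vector (w equal to a coordinate basis vector satisfies the sum-to-one constraint), so the minimized variance is never worse in-sample than that of the best individual expert; this mirrors the best-expert guarantee described in the abstract.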
Source: arXiv: 2604.27892