arXiv submission date: 2026-03-24
📄 Abstract - Post-Selection Distributional Model Evaluation

Formal model evaluation methods typically certify that a model satisfies a prescribed target key performance indicator (KPI) level. However, in many applications, the relevant target KPI level may not be known a priori, and the user may instead wish to compare candidate models by analyzing the full trade-offs between performance and reliability achievable at test time by the models. This task, which requires reliable estimation of the test-time KPI distributions, is made more complicated by the fact that the same data must often be used both to pre-select a subset of candidate models and to estimate their KPI distributions, causing a potential post-selection bias. In this work, we introduce post-selection distributional model evaluation (PS-DME), a general framework for statistically valid distributional model assessment after arbitrary data-dependent model pre-selection. Building on e-values, PS-DME controls the post-selection false coverage rate (FCR) for the distributional KPI estimates and is proved to be more sample efficient than a baseline method based on sample splitting. Experiments on synthetic data, text-to-SQL decoding with large language models, and telecom network performance evaluation demonstrate that PS-DME enables reliable comparison of candidate configurations across a range of reliability levels, supporting the statistically reliable exploration of performance--reliability trade-offs.
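The abstract's validity guarantee rests on e-values. The paper's actual PS-DME construction is not reproduced here; as a hedged toy sketch only, the following Python snippet illustrates the basic property that e-value-based methods exploit: an e-value has expectation at most 1 under the null, so by Markov's inequality it can be thresholded at `1/alpha`, and this guarantee survives data-dependent selection via simple union bounds. The Bernoulli likelihood-ratio test, the candidate accuracies, and `p0`/`p1` below are illustrative assumptions, not taken from the paper.

```python
# Toy illustration (NOT the paper's PS-DME construction): a likelihood-ratio
# e-value for testing H0: model accuracy <= p0, given `successes` correct
# answers out of n i.i.d. evaluation samples. Under any accuracy p <= p0 the
# expectation of this ratio is at most 1, so by Markov's inequality
# P(e >= 1/alpha) <= alpha -- the composability property that makes e-values
# attractive for post-selection inference.

def e_value(successes: int, n: int, p0: float = 0.5, p1: float = 0.8) -> float:
    """Likelihood ratio of an alternative accuracy p1 to the null boundary p0."""
    k, m = successes, n - successes
    return (p1 / p0) ** k * ((1 - p1) / (1 - p0)) ** m

alpha = 0.05
# A weak candidate (50/100 correct) vs. a strong one (90/100 correct):
e_weak = e_value(50, 100)    # far below 1: no evidence against H0
e_strong = e_value(90, 100)  # very large: strong evidence against H0

# Selecting the candidate with the largest e-value and testing it at the
# Bonferroni-style threshold M/alpha (M candidates) stays valid, since
# P(max_i e_i >= M/alpha) <= sum_i P(e_i >= M/alpha) <= alpha.
M = 2
selected_significant = max(e_weak, e_strong) >= M / alpha
```

The same Markov-plus-union-bound reasoning is what more refined e-value procedures (e.g., e-BH-style rules for FDR/FCR control, which the paper builds on) sharpen to avoid the sample-splitting baseline's efficiency loss.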

Top-level tags: model evaluation machine learning theory
Detailed tags: post-selection inference distributional evaluation e-values false coverage rate statistical reliability

Post-Selection Distributional Model Evaluation


1️⃣ One-Sentence Summary

This paper proposes a new method called PS-DME that, even after a user has pre-selected a subset of models from a pool of candidates, can still evaluate those models' performance accurately and without post-selection bias across a range of reliability levels, helping users trade off model performance against stability more reliably.

Source: arXiv 2603.23055