Interval-Based AUC (iAUC): Extending ROC Analysis to Uncertainty-Aware Classification
1️⃣ One-Sentence Summary
This paper proposes a new evaluation framework that quantifies the uncertainty of interval-valued prediction models by introducing two new metrics, AUC_L and AUC_U, enabling more reliable ranking and decision-making in risk prediction.
In high-stakes risk prediction, quantifying uncertainty through interval-valued predictions is essential for reliable decision-making. However, standard evaluation tools like the receiver operating characteristic (ROC) curve and the area under the curve (AUC) are designed for point scores and fail to capture the impact of predictive uncertainty on ranking performance. We propose an uncertainty-aware ROC framework specifically for interval-valued predictions, introducing two new measures: $AUC_L$ and $AUC_U$. This framework enables an informative three-region decomposition of the ROC plane, partitioning pairwise rankings into correct, incorrect, and uncertain orderings. This approach naturally supports selective prediction by allowing models to abstain from ranking cases with overlapping intervals, thereby optimizing the trade-off between abstention rate and discriminative reliability. We prove that under valid class-conditional coverage, $AUC_L$ and $AUC_U$ provide formal lower and upper bounds on the theoretical optimal AUC ($AUC^*$), characterizing the physical limit of achievable discrimination. The proposed framework applies broadly to interval-valued prediction models, regardless of the interval construction method. Experiments on real-world benchmark datasets, using bootstrap-based intervals as one instantiation, validate the framework's correctness and demonstrate its practical utility for uncertainty-aware evaluation and decision-making.
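The three-region decomposition described above can be sketched in code. This is a minimal illustrative reading, not the paper's exact definitions: `interval_auc` is a hypothetical helper that, for every (positive, negative) pair, calls the ordering *correct* when the positive's interval lies strictly above the negative's, *incorrect* when strictly below, and *uncertain* when the intervals overlap, then bounds AUC by excluding or crediting the uncertain pairs.

```python
import numpy as np

def interval_auc(lo, hi, y):
    """Illustrative AUC_L / AUC_U from interval-valued scores.

    lo, hi : per-sample interval lower/upper bounds
    y      : binary labels (1 = positive)

    A (positive, negative) pair is
      - correct   if the positive's lower bound exceeds the negative's upper bound,
      - incorrect if the positive's upper bound is below the negative's lower bound,
      - uncertain if the intervals overlap.
    AUC_L counts only certainly-correct pairs; AUC_U additionally credits
    the uncertain ones (one plausible instantiation of the bounds).
    """
    lo, hi, y = map(np.asarray, (lo, hi, y))
    pos, neg = y == 1, y == 0
    # Pairwise comparisons via broadcasting: positives (rows) vs negatives (cols).
    correct = int((lo[pos][:, None] > hi[neg][None, :]).sum())
    incorrect = int((hi[pos][:, None] < lo[neg][None, :]).sum())
    total = int(pos.sum()) * int(neg.sum())
    uncertain = total - correct - incorrect
    auc_l = correct / total
    auc_u = (correct + uncertain) / total
    return auc_l, auc_u, uncertain / total  # last value = abstention rate
```

For example, with one positive interval (0.6, 0.8) and two negatives (0.1, 0.3) and (0.4, 0.7), the first pair is certainly correct and the second overlaps, giving AUC_L = 0.5, AUC_U = 1.0, and a 50% abstention rate; this also illustrates the abstention/reliability trade-off in selective prediction mentioned above.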
Source: arXiv: 2602.04775