arXiv submission date: 2026-03-03
📄 Abstract - An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification

As artificial intelligence systems move toward clinical deployment, ensuring reliable prediction behavior is fundamental for safety-critical decision-making tasks. One proposed safeguard is selective prediction, where models can defer uncertain predictions to human experts for review. In this work, we empirically evaluate the reliability of uncertainty-based selective prediction in multilabel clinical condition classification using multimodal ICU data. Across a range of state-of-the-art unimodal and multimodal models, we find that selective prediction can substantially degrade performance despite strong standard evaluation metrics. This failure is driven by severe class-dependent miscalibration, whereby models assign high uncertainty to correct predictions and low uncertainty to incorrect ones, particularly for underrepresented clinical conditions. Our results show that commonly used aggregate metrics can obscure these effects, limiting their ability to assess selective prediction behavior in this setting. Taken together, our findings characterize a task-specific failure mode of selective prediction in multimodal clinical condition classification and highlight the need for calibration-aware evaluation to provide strong guarantees of safety and robustness in clinical AI.

Top tags: medical model evaluation machine learning
Detailed tags: selective prediction calibration clinical ai multimodal classification uncertainty estimation

An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification


1️⃣ One-sentence summary

This paper finds that in clinical condition classification on multimodal ICU data, models that score well on standard evaluation metrics nonetheless produce severely miscalibrated uncertainty estimates. As a result, performance degrades substantially under selective prediction (deferring uncertain predictions to expert review), with predictions on underrepresented classes being especially unreliable, which exposes the limits of current evaluation practice for guaranteeing clinical AI safety.
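The failure mode described above can be illustrated with a small sketch. The code below is a hypothetical toy example, not the paper's experimental setup: it implements confidence-thresholded selective prediction and simulates a miscalibrated classifier whose confidence is higher when it is wrong, so that deferring "uncertain" cases actually lowers accuracy on the accepted subset.

```python
import numpy as np

def selective_accuracy(confidence, correct, threshold):
    """Accept predictions with confidence >= threshold; defer the rest.

    Returns (coverage, accuracy on the accepted subset).
    """
    accepted = confidence >= threshold
    coverage = accepted.mean()
    if accepted.sum() == 0:
        return coverage, float("nan")
    return coverage, correct[accepted].mean()

# Toy simulation (assumed numbers, for illustration only):
# the model is right 80% of the time overall, but its confidence
# is anti-correlated with correctness, mimicking the class-dependent
# miscalibration the paper reports.
rng = np.random.default_rng(0)
correct = rng.random(1000) < 0.8
confidence = np.where(correct,
                      rng.uniform(0.5, 0.9, 1000),   # correct -> lower confidence
                      rng.uniform(0.7, 1.0, 1000))   # wrong   -> higher confidence

cov_full, acc_full = selective_accuracy(confidence, correct, 0.0)
cov_sel, acc_sel = selective_accuracy(confidence, correct, 0.85)
# Under this miscalibration, accuracy on the retained predictions
# is lower than simply predicting on every case.
```

With well-calibrated confidences the accepted-subset accuracy would rise as the threshold increases; here it falls, which is exactly why aggregate metrics alone cannot certify safe selective prediction.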

Source: arXiv: 2603.02719