arXiv submission date: 2026-02-09
📄 Abstract - Learning Credal Ensembles via Distributionally Robust Optimization

Credal predictors are models that are aware of epistemic uncertainty and produce a convex set of probabilistic predictions. They offer a principled way to quantify predictive epistemic uncertainty (EU) and have been shown to improve model robustness in various settings. However, most state-of-the-art methods mainly define EU as disagreement caused by random training initializations, which mostly reflects sensitivity to optimization randomness rather than uncertainty from deeper sources. To address this, we define EU as disagreement among models trained with varying relaxations of the i.i.d. assumption between training and test data. Based on this idea, we propose CreDRO, which learns an ensemble of plausible models through distributionally robust optimization. As a result, CreDRO captures EU not only from training randomness but also from meaningful disagreement due to potential distribution shifts between training and test data. Empirical results show that CreDRO consistently outperforms existing credal methods on tasks such as out-of-distribution detection across multiple benchmarks and selective classification in medical applications.
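The abstract describes training each ensemble member under a different relaxation of the i.i.d. assumption via distributionally robust optimization (DRO). The paper's exact objective is not reproduced here; the sketch below is only an illustration of that general idea, using a CVaR-style DRO loss at varying robustness levels (alpha = 1.0 recovers plain ERM). All names (`cvar_loss`, `train_member`, `CredalEnsemble`, `alphas`) are hypothetical and not taken from the paper.

```python
# Illustrative sketch, NOT the authors' implementation of CreDRO.
# Each ensemble member is trained with a different CVaR robustness level alpha,
# approximating "varying relaxations of the i.i.d. assumption".
import torch
import torch.nn as nn
import torch.nn.functional as F


def cvar_loss(per_sample_loss: torch.Tensor, alpha: float) -> torch.Tensor:
    """Average loss over the worst alpha-fraction of the batch (CVaR_alpha).

    Smaller alpha -> more pessimistic reweighting of the training distribution,
    i.e. a stronger hedge against train/test distribution shift.
    """
    if alpha >= 1.0:
        return per_sample_loss.mean()          # plain ERM
    k = max(1, int(alpha * per_sample_loss.numel()))
    worst, _ = torch.topk(per_sample_loss, k)  # hardest k examples in the batch
    return worst.mean()


def train_member(model: nn.Module, loader, alpha: float, epochs: int = 5, lr: float = 1e-3):
    """Train one ensemble member under a fixed DRO robustness level."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            per_sample = F.cross_entropy(model(x), y, reduction="none")
            loss = cvar_loss(per_sample, alpha)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


class CredalEnsemble:
    """Wraps K members; its prediction is the set of member probability vectors."""

    def __init__(self, members):
        self.members = members

    @torch.no_grad()
    def predict_set(self, x: torch.Tensor) -> torch.Tensor:
        # One probability vector per member; the credal set is (an approximation
        # of) their convex hull.  Shape: (K, batch, classes).
        return torch.stack([F.softmax(m(x), dim=-1) for m in self.members])


# Hypothetical usage (make_model and train_loader are placeholders):
# alphas = [1.0, 0.7, 0.5, 0.3]
# members = [train_member(make_model(), train_loader, a) for a in alphas]
# credal = CredalEnsemble(members)
```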

Top-level tags: machine learning, model evaluation, theory
Detailed tags: credal ensembles, distributionally robust optimization, epistemic uncertainty, out-of-distribution detection, selective classification

Learning Credal Ensembles via Distributionally Robust Optimization


1️⃣ One-Sentence Summary

This paper proposes a new method, CreDRO, which trains a set of models under different assumptions about the data distribution in order to assess the epistemic uncertainty of predictions more comprehensively, yielding more reliable and robust predictions when the data distribution shifts (for example, in medical applications or out-of-distribution detection).
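The tasks mentioned above (out-of-distribution detection, selective classification) need a scalar epistemic-uncertainty score derived from the credal set. The paper's specific score is not given here; a minimal sketch, assuming the `CredalEnsemble.predict_set` output above, is to use the maximal pairwise total-variation disagreement among members, which is zero exactly when all members agree.

```python
# Hedged sketch: one common way to turn a credal set (set of member probability
# vectors) into an epistemic-uncertainty score.  Not necessarily the score used
# by CreDRO; the threshold below is a hypothetical placeholder.
import torch


def epistemic_disagreement(prob_set: torch.Tensor) -> torch.Tensor:
    """prob_set: (K, batch, classes) member probabilities -> (batch,) EU scores."""
    diffs = prob_set.unsqueeze(0) - prob_set.unsqueeze(1)   # (K, K, batch, classes)
    tv = 0.5 * diffs.abs().sum(dim=-1)                      # pairwise total-variation distances
    return tv.flatten(0, 1).max(dim=0).values               # worst-case member disagreement


# Usage for selective classification / OOD flagging:
# eu = epistemic_disagreement(credal.predict_set(x))
# abstain_or_flag_ood = eu > 0.5   # hypothetical threshold
```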

Source: arXiv:2602.08470