Efficient Credal Prediction through Decalibration
1️⃣ One-sentence summary
This paper proposes an efficient method called "decalibration" that quickly produces probability intervals representing epistemic uncertainty for machine learning models (including large foundation models), substantially reducing computational cost while maintaining performance.
A reliable representation of uncertainty is essential for the application of modern machine learning methods in safety-critical settings. In this regard, the use of credal sets (i.e., convex sets of probability distributions) has recently been proposed as a suitable approach to representing epistemic uncertainty. However, as with other approaches to epistemic uncertainty, training credal predictors is computationally complex and usually involves (re-)training an ensemble of models. The resulting computational complexity prevents their adoption for complex models such as foundation models and multi-modal systems. To address this problem, we propose an efficient method for credal prediction that is grounded in the notion of relative likelihood and inspired by techniques for the calibration of probabilistic classifiers. For each class label, our method predicts a range of plausible probabilities in the form of an interval. To produce the lower and upper bounds of these intervals, we propose a technique that we refer to as decalibration. Extensive experiments show that our method yields credal sets with strong performance across diverse tasks, including coverage-efficiency evaluation, out-of-distribution detection, and in-context learning. Notably, we demonstrate credal prediction on models such as TabPFN and CLIP -- architectures for which the construction of credal sets was previously infeasible.
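To make the idea of per-class probability intervals concrete, here is a minimal illustrative sketch (not the paper's actual relative-likelihood-based method): it perturbs a classifier's softmax with two assumed temperatures, one sharpening and one flattening, and takes element-wise bounds as a rough lower/upper probability envelope for each class. The function names and temperature values are hypothetical choices for illustration only.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def credal_intervals(logits, t_low=0.5, t_high=2.0):
    """Illustrative interval construction (hypothetical, not the paper's method):
    evaluate the softmax at a sharpening temperature (t_low < 1) and a
    flattening temperature (t_high > 1), plus the unscaled prediction, and
    take element-wise min/max as lower/upper probability bounds per class."""
    candidates = np.stack([
        softmax(logits, t_low),   # over-confident distribution
        softmax(logits, 1.0),     # original point prediction
        softmax(logits, t_high),  # under-confident distribution
    ])
    lower = candidates.min(axis=0)
    upper = candidates.max(axis=0)
    return lower, upper

logits = np.array([2.0, 1.0, 0.1])
lo, hi = credal_intervals(logits)
# each class i gets an interval [lo[i], hi[i]] that contains the point prediction
```

By construction every interval contains the model's original softmax probability, so the resulting credal set is a superset of the point prediction; the paper's actual decalibration technique derives the bounds from relative likelihood rather than from ad-hoc temperatures.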
Source: arXiv:2603.08495