Abstract - Credal Concept Bottleneck Models for Epistemic-Aleatoric Uncertainty Decomposition
Concept Bottleneck Models (CBMs) predict through human-interpretable concepts, but they typically output point concept probabilities that conflate epistemic uncertainty (reducible model underspecification) with aleatoric uncertainty (irreducible input ambiguity). This makes concept-level uncertainty hard to interpret and, more importantly, hard to act upon. We introduce CREDENCE (Credal Ensemble Concept Estimation), a CBM framework that decomposes concept uncertainty by construction. CREDENCE represents each concept as a credal prediction (a probability interval), derives epistemic uncertainty from disagreement across diverse concept heads, and estimates aleatoric uncertainty via a dedicated ambiguity output trained to match annotator disagreement when available. The resulting signals support prescriptive decisions: automate low-uncertainty cases, prioritize data collection for high-epistemic cases, route high-aleatoric cases to human review, and abstain when both are high. Across several tasks, we show that epistemic uncertainty is positively associated with prediction errors, whereas aleatoric uncertainty closely tracks annotator disagreement, providing guidance beyond error correlation. Our implementation is available at the following link: this https URL
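The decomposition and routing described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names, the min/max credal interval, and the thresholds are all assumptions chosen for clarity. Epistemic uncertainty is taken as the width of the interval spanned by the diverse concept heads, and aleatoric uncertainty is assumed to come from a separate ambiguity output.

```python
import numpy as np

def credal_concept_prediction(head_probs):
    """Combine diverse concept heads into a credal (interval) prediction.

    head_probs: array of shape (n_heads, n_concepts) with each head's
    concept probabilities. The interval [lower, upper] spans the heads'
    outputs; its width serves as the epistemic uncertainty signal.
    """
    lower = head_probs.min(axis=0)
    upper = head_probs.max(axis=0)
    epistemic = upper - lower  # disagreement across heads
    return lower, upper, epistemic

def route(epistemic, aleatoric, eps_thr=0.2, ale_thr=0.2):
    """Prescriptive routing from the decomposed uncertainties.

    Thresholds are illustrative; in practice they would be tuned
    per task (e.g., on a validation set).
    """
    if epistemic < eps_thr and aleatoric < ale_thr:
        return "automate"       # both low: safe to automate
    if epistemic >= eps_thr and aleatoric < ale_thr:
        return "collect_data"   # model underspecified: more data helps
    if epistemic < eps_thr and aleatoric >= ale_thr:
        return "human_review"   # input ambiguous: defer to a human
    return "abstain"            # both high: decline to act

# Example: two concepts, three heads.
heads = np.array([[0.70, 0.20],
                  [0.90, 0.30],
                  [0.80, 0.25]])
lower, upper, eps = credal_concept_prediction(heads)
# aleatoric score would come from the model's ambiguity output;
# a fixed value stands in for it here.
decision = route(eps[0], aleatoric=0.05)
```

With these illustrative thresholds, the first concept has interval [0.70, 0.90] (epistemic width 0.20), so the routing defers rather than automating; a narrower interval with low ambiguity would return `"automate"`.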
Credal Concept Bottleneck Models for Epistemic-Aleatoric Uncertainty Decomposition
1️⃣ One-Sentence Summary
This paper proposes CREDENCE, a framework that decomposes the sources of uncertainty in concept bottleneck models by introducing probability intervals (credal predictions) and diverse prediction heads, separating reducible epistemic uncertainty from irreducible aleatoric uncertainty. This decomposition lets the model automate decisions when uncertainty is low and, when uncertainty is high, route cases to human review or flag them for additional data collection, substantially improving interpretability and safety.