arXiv submission date: 2026-04-14
📄 Abstract - Socrates Loss: Unifying Confidence Calibration and Classification by Leveraging the Unknown

Deep neural networks, despite their high accuracy, often exhibit poor confidence calibration, limiting their reliability in high-stakes applications. Current ad-hoc confidence calibration methods attempt to fix this during training but face a fundamental trade-off: two-phase training methods achieve strong classification performance at the cost of training instability and poorer confidence calibration, while single-loss methods are stable but underperform in classification. This paper addresses and mitigates this stability-performance trade-off. We propose Socrates Loss, a novel, unified loss function that explicitly leverages uncertainty by incorporating an auxiliary unknown class, whose predictions directly influence both the loss function and a dynamic uncertainty penalty. This unified objective allows the model to be optimized for classification and confidence calibration simultaneously, without the instability of complex, scheduled losses. We provide theoretical guarantees that our method regularizes the model to prevent miscalibration and overfitting. Across four benchmark datasets and multiple architectures, our comprehensive experiments demonstrate that Socrates Loss consistently improves training stability while achieving a more favorable accuracy-calibration trade-off, often converging faster than existing methods.
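The abstract does not spell out the loss formulation, but the core idea, an extra (K+1)-th "unknown" class whose predicted probability feeds into a unified objective alongside an uncertainty penalty, can be sketched. The following NumPy sketch is purely illustrative: the function name `socrates_loss_sketch`, the penalty form, and the weight `lam` are all assumptions, not the paper's actual definition.

```python
import numpy as np

def socrates_loss_sketch(logits, labels, lam=0.1):
    """Hypothetical sketch of a unified calibration-aware loss.

    logits: (N, K+1) array; the last column is the auxiliary 'unknown' class.
    labels: (N,) integer true labels in [0, K).
    lam:    weight of the uncertainty penalty (hypothetical).
    """
    # Softmax over all K+1 classes, so probability mass can flow to 'unknown'.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = len(labels)
    # Standard cross-entropy on the true (known) class.
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # One possible uncertainty penalty: when the top known-class prediction
    # is wrong, penalize the model for NOT routing mass to 'unknown',
    # discouraging confident mistakes. (One of many plausible designs.)
    p_unknown = p[:, -1]
    wrong = (p[:, :-1].argmax(axis=1) != labels).astype(float)
    penalty = (wrong * (1.0 - p_unknown) ** 2).mean()
    return ce + lam * penalty
```

Under this sketch, a confidently wrong prediction with little "unknown" mass incurs both a large cross-entropy term and the full penalty, while an accurate, well-calibrated prediction pays almost nothing, which is the qualitative behavior the abstract describes.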

Top-level tags: model training, model evaluation, machine learning
Detailed tags: confidence calibration, loss function, uncertainty, neural networks, classification

Socrates Loss: Unifying Confidence Calibration and Classification by Leveraging the Unknown


1️⃣ One-sentence summary

This paper proposes a new method called "Socrates Loss," which introduces an "unknown" class to jointly optimize a neural network's classification accuracy and the reliability of its predicted confidence, mitigating the trade-off between stability and performance that existing methods struggle to balance.

Source: arXiv:2604.12245