arXiv submission date: 2026-03-02
📄 Abstract - Adaptive Confidence Regularization for Multimodal Failure Detection

The deployment of multimodal models in high-stakes domains, such as self-driving vehicles and medical diagnostics, demands not only strong predictive performance but also reliable mechanisms for detecting failures. In this work, we address the largely unexplored problem of failure detection in multimodal contexts. We propose Adaptive Confidence Regularization (ACR), a novel framework specifically designed to detect multimodal failures. Our approach is driven by a key observation: in most failure cases, the confidence of the multimodal prediction is significantly lower than that of at least one unimodal branch, a phenomenon we term confidence degradation. To mitigate this, we introduce an Adaptive Confidence Loss that penalizes such degradations during training. In addition, we propose Multimodal Feature Swapping, a novel outlier synthesis technique that generates challenging, failure-aware training examples. By training with these synthetic failures, ACR learns to more effectively recognize and reject uncertain predictions, thereby improving overall reliability. Extensive experiments across four datasets, three modalities, and multiple evaluation settings demonstrate that ACR achieves consistent and robust gains. The source code will be available at this https URL.
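The abstract's key observation is that failures tend to coincide with "confidence degradation": the fused multimodal prediction is less confident than at least one unimodal branch. The exact loss is not given in the abstract, so the hinge-style penalty below is a hypothetical sketch of how such a degradation term could be computed (function names and the hinge form are assumptions, not the paper's formulation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confidence(logits):
    """Confidence as the maximum softmax probability."""
    return max(softmax(logits))

def adaptive_confidence_penalty(mm_logits, unimodal_logits):
    """Hypothetical confidence-degradation penalty (a sketch, not the
    paper's exact loss): positive only when the multimodal head is less
    confident than the strongest unimodal branch. In training this term
    would be added to the usual task loss (e.g. cross-entropy).
    """
    mm_conf = confidence(mm_logits)
    best_uni_conf = max(confidence(u) for u in unimodal_logits)
    return max(0.0, best_uni_conf - mm_conf)
```

In this sketch the penalty vanishes whenever fusion preserves or improves confidence, so well-behaved samples are unaffected and only degraded predictions are pushed back during training.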

Top-level tags: multi-modal model, evaluation, machine learning
Detailed tags: failure detection, confidence calibration, outlier synthesis, multimodal fusion, reliability

Adaptive Confidence Regularization for Multimodal Failure Detection


1️⃣ One-sentence summary

This paper proposes a new method called Adaptive Confidence Regularization (ACR), which penalizes the confidence degradation that occurs in multimodal predictions and combines this with a technique for synthesizing simulated failure samples, effectively improving the reliability of failure detection for multimodal models in high-stakes applications such as autonomous driving and medical diagnosis.
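The second ingredient, Multimodal Feature Swapping, synthesizes failure-like outliers for training. The abstract does not describe the procedure in detail; a plausible minimal sketch is to swap one modality's features between different samples in a batch, producing mismatched cross-modal pairs (all names below are assumptions for illustration):

```python
import random

def feature_swap(batch, modality):
    """Hypothetical sketch of Multimodal Feature Swapping: create
    failure-aware outliers by replacing one modality's features with
    those of another sample in the batch, yielding mismatched pairs
    (e.g. the image of one sample paired with the audio of another).
    `batch` is a list of dicts mapping modality name -> features.
    """
    perm = list(range(len(batch)))
    random.shuffle(perm)  # a derangement is not guaranteed; fine for a sketch
    swapped = []
    for i, sample in enumerate(batch):
        new_sample = dict(sample)          # keep the other modalities intact
        new_sample[modality] = batch[perm[i]][modality]
        swapped.append(new_sample)
    return swapped
```

Training on such mismatched samples, labeled as failures, would give the model explicit examples of inputs it should reject with low confidence.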

Source: arXiv:2603.02200