Monte Carlo Stochastic Depth for Uncertainty Estimation in Deep Learning
1️⃣ One-sentence summary
This paper establishes a theoretical foundation for the "stochastic depth" regularization method in deep learning and develops it into an efficient, reliable tool for Bayesian uncertainty estimation, achieving calibration and uncertainty-ranking performance comparable to, and in some cases better than, mainstream methods on complex tasks such as object detection.
The deployment of deep neural networks in safety-critical systems necessitates reliable and efficient uncertainty quantification (UQ). A practical and widespread strategy for UQ is repurposing stochastic regularizers as scalable approximate Bayesian inference methods, such as Monte Carlo Dropout (MCD) and MC-DropBlock (MCDB). However, this paradigm remains under-explored for Stochastic Depth (SD), a regularizer integral to the residual-based backbones of most modern architectures. While prior work demonstrated its empirical promise for segmentation, a formal theoretical connection to Bayesian variational inference and a benchmark on complex, multi-task problems like object detection are missing. In this paper, we first provide theoretical insights connecting Monte Carlo Stochastic Depth (MCSD) to principled approximate variational inference. We then present the first comprehensive empirical benchmark of MCSD against MCD and MCDB on state-of-the-art detectors (YOLO, RT-DETR) using the COCO and COCO-O datasets. Our results position MCSD as a robust and computationally efficient method that achieves highly competitive predictive accuracy (mAP), notably yielding slight improvements in calibration (ECE) and uncertainty ranking (AUARC) compared to MCD. We thus establish MCSD as a theoretically-grounded and empirically-validated tool for efficient Bayesian approximation in modern deep learning.
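The paradigm the abstract describes, keeping a stochastic regularizer active at test time and averaging over repeated forward passes, can be sketched for stochastic depth as follows. This is a minimal toy illustration, not the paper's implementation: the residual blocks, weights, and drop probability are all hypothetical placeholders.

```python
# Minimal sketch of Monte Carlo Stochastic Depth (MCSD) inference.
# Assumption: a toy stack of residual blocks whose transform branch is
# randomly skipped with probability p_drop, kept stochastic at test time
# so repeated forward passes yield a predictive distribution.
import numpy as np

def residual_block(x, weight, p_drop, rng):
    """Identity-skip residual block; the transform branch survives
    with probability (1 - p_drop), as in stochastic depth."""
    if rng.random() < p_drop:
        return x                      # block dropped: pure identity
    return x + np.tanh(x @ weight)    # block kept: residual update

def mcsd_predict(x, weights, p_drop, n_samples, rng):
    """Run n_samples stochastic forward passes; return the predictive
    mean and per-dimension variance (an uncertainty proxy)."""
    outputs = []
    for _ in range(n_samples):
        h = x
        for w in weights:
            h = residual_block(h, w, p_drop, rng)
        outputs.append(h)
    outputs = np.stack(outputs)
    return outputs.mean(axis=0), outputs.var(axis=0)

# Toy usage: 4 residual blocks on a 3-dim input, 50 MC samples.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 3)) * 0.1 for _ in range(4)]
x = rng.standard_normal(3)
mean, var = mcsd_predict(x, weights, p_drop=0.2, n_samples=50, rng=rng)
```

The variance across MC samples serves the same role here that it does for MC Dropout: inputs whose predictions change more under random block removal are flagged as more uncertain.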
Source: arXiv: 2604.12719