arXiv submission date: 2026-02-19
📄 Abstract - Provably Explaining Neural Additive Models

Despite significant progress in post-hoc explanation methods for neural networks, many remain heuristic and lack provable guarantees. A key approach for obtaining explanations with provable guarantees is to identify a cardinally-minimal subset of input features that is by itself provably sufficient to determine the prediction. However, for standard neural networks this task is often computationally infeasible, as it demands a worst-case exponential number of verification queries in the number of input features, each of which is NP-hard. In this work, we show that for Neural Additive Models (NAMs), a recent and more interpretable neural network family, we can efficiently generate explanations with such guarantees. We present a new model-specific algorithm for NAMs that generates provably cardinally-minimal explanations using only a logarithmic number of verification queries in the number of input features, after a parallelized preprocessing step with logarithmic runtime in the required precision is applied to each small univariate NAM component. Our algorithm not only makes the task of obtaining cardinally-minimal explanations feasible, but even outperforms existing algorithms designed to find the relaxed variant of subset-minimal explanations - which may be larger and less informative but easier to compute - despite solving a much more difficult task. Our experiments demonstrate that, compared to previous algorithms, our approach provides provably smaller explanations and substantially reduces computation time. Moreover, we show that our generated provable explanations offer benefits that are unattainable by standard sampling-based techniques typically used to interpret NAMs.
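The additive structure is what makes the logarithmic query count plausible: since a NAM computes f(x) = Σᵢ fᵢ(xᵢ), freeing a feature perturbs the output by an interval that can be precomputed per component. The sketch below is a simplified illustration of this idea, not the paper's algorithm: each univariate component is given as a lookup table over a discretized grid (an assumption standing in for the preprocessing step), per-feature worst-case "slack" is precomputed, and a binary search over a slack-sorted prefix finds a smallest sufficient feature subset with O(log n) sufficiency checks. The function name and interface are hypothetical.

```python
import numpy as np

def minimal_explanation(components, x_idx, threshold):
    """Toy cardinally-minimal sufficient-feature search for an additive model.

    components: list of 1-D arrays, component f_i evaluated on a grid.
    x_idx: grid index of the input value for each feature.
    Returns a smallest set of feature indices that, kept fixed, provably
    preserves the prediction sign(f(x) - threshold) under additivity.
    """
    fixed = np.array([c[i] for c, i in zip(components, x_idx)])
    total = fixed.sum()
    positive = total >= threshold

    # "Preprocessing": the worst value each component could take if freed.
    worst = np.array([c.min() if positive else c.max() for c in components])
    # Nonnegative per-feature risk of freeing that feature.
    slack = fixed - worst if positive else worst - fixed
    order = np.argsort(-slack)  # fix the riskiest features first

    # Sufficiency of a prefix is monotone in its length, so binary-search
    # the smallest prefix k whose freed remainder cannot flip the prediction.
    lo, hi = 0, len(components)
    while lo < hi:
        k = (lo + hi) // 2
        free_slack = slack[order[k:]].sum()
        worst_total = total - free_slack if positive else total + free_slack
        sufficient = worst_total >= threshold if positive else worst_total < threshold
        if sufficient:
            hi = k
        else:
            lo = k + 1
    return sorted(int(i) for i in order[:lo])
```

For instance, with components taking values [0, 1, 5], [0, 2, 3], [0, 0.5, 1] at grid points, input at the last grid point of each (so f(x) = 9), and threshold 4, only the first feature must be fixed: even if the other two drop to their minima, the output stays at 5 ≥ 4. Each sufficiency check here is simple arithmetic; in the real setting it corresponds to a verification query.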

Top-level tags: machine learning theory, model evaluation
Detailed tags: explainable ai, neural additive models, provable guarantees, feature attribution, model verification

Provably Explaining Neural Additive Models


1️⃣ One-sentence summary

This paper proposes an efficient new algorithm that, for Neural Additive Models, a more interpretable family of neural networks, quickly finds and certifies a minimal subset of the most decisive input features, thereby reliably explaining the model's predictions.

Source: arXiv:2602.17530