
arXiv submission date: 2026-04-16
📄 Abstract - Structural interpretability in SVMs with truncated orthogonal polynomial kernels

We study post-training interpretability for Support Vector Machines (SVMs) built from truncated orthogonal polynomial kernels. Since the associated reproducing kernel Hilbert space is finite-dimensional and admits an explicit tensor-product orthonormal basis, the fitted decision function can be expanded exactly in intrinsic RKHS coordinates. This leads to Orthogonal Representation Contribution Analysis (ORCA), a diagnostic framework based on normalized Orthogonal Kernel Contribution (OKC) indices. These indices quantify how the squared RKHS norm of the classifier is distributed across interaction orders, total polynomial degrees, marginal coordinate effects, and pairwise contributions. The methodology is fully post-training and requires neither surrogate models nor retraining. We illustrate its diagnostic value on a synthetic double-spiral problem and on a real five-dimensional echocardiogram dataset. The results show that the proposed indices reveal structural aspects of model complexity that are not captured by predictive accuracy alone.
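The mechanism described above can be sketched in a few lines. Because the truncated RKHS has an explicit orthonormal basis, the squared RKHS norm of the fitted classifier is just the sum of squared expansion coefficients, and normalized OKC-style indices group those squared coefficients by interaction order (number of active coordinates) or total polynomial degree. The function and coefficient values below are hypothetical illustrations, not the paper's implementation:

```python
# Hypothetical sketch of OKC-style indices. Assumes the decision function
# has already been expanded in an orthonormal tensor-product polynomial
# basis, with coefficients c_alpha indexed by multi-indices
# alpha = (alpha_1, ..., alpha_d).

def okc_indices(coeffs):
    """Distribute the squared RKHS norm over interaction orders and degrees.

    coeffs: dict mapping multi-index tuples to expansion coefficients.
    Returns (by_order, by_degree), each a dict of normalized shares
    summing to 1 by construction.
    """
    total = sum(c ** 2 for c in coeffs.values())  # squared RKHS norm
    by_order, by_degree = {}, {}
    for alpha, c in coeffs.items():
        order = sum(1 for a in alpha if a > 0)   # active coordinates
        degree = sum(alpha)                      # total polynomial degree
        by_order[order] = by_order.get(order, 0.0) + c ** 2 / total
        by_degree[degree] = by_degree.get(degree, 0.0) + c ** 2 / total
    return by_order, by_degree

# Toy 2-d example with four basis coefficients (made-up values).
coeffs = {(0, 0): 0.5, (1, 0): 1.0, (0, 2): 1.0, (1, 1): 0.5}
by_order, by_degree = okc_indices(coeffs)
# Here 80% of the squared norm sits in pure one-coordinate (main) effects,
# and 10% in the pairwise interaction term (1, 1).
```

A reading such as "most of the norm is concentrated at interaction order 1" is exactly the kind of structural diagnostic the abstract attributes to the OKC indices, independent of predictive accuracy.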

Top-level tags: machine learning theory, model evaluation
Detailed tags: interpretability, support vector machines, kernel methods, model diagnostics, orthogonal polynomials

Structural interpretability in SVMs with truncated orthogonal polynomial kernels


1️⃣ One-sentence summary

The paper proposes a diagnostic framework called ORCA for analyzing SVM models built from truncated orthogonal polynomial kernels: by quantifying how the classifier's contributions are distributed across feature interactions and polynomial degrees, it directly reveals the model's internal structural complexity without retraining or surrogate models.

Source: arXiv 2604.15285