arXiv submission date: 2026-04-14
📄 Abstract - Shapley Value-Guided Adaptive Ensemble Learning for Explainable Financial Fraud Detection with U.S. Regulatory Compliance Validation

Financial crime costs U.S. institutions over $32 billion each year. Although AI tools for fraud detection have become more advanced, their use in real-world systems still faces a major obstacle: many of these models operate as black boxes that cannot provide the transparent, auditable explanations required by regulations such as OCC Bulletin 2011-12 and Federal Reserve SR 11-7. This study makes three main contributions. First, it offers a thorough evaluation of explanation quality across faithfulness (sufficiency and comprehensiveness at k=5, 10, and 15) and stability (Kendall's W across 30 bootstrap samples). XGBoost paired with TreeExplainer achieves near-perfect stability (W=0.9912), while LSTM with DeepExplainer shows weak results (W=0.4962). Second, the paper introduces the SHAP-Guided Adaptive Ensemble (SGAE), which dynamically adjusts per-transaction ensemble weights based on SHAP attribution agreement, achieving the highest AUC-ROC among all tested models (0.8837 held-out; 0.9245 cross-validation). Third, a complete three-architecture evaluation of LSTM, Transformer, and GNN-GraphSAGE on the full 590,540-transaction IEEE-CIS dataset is provided, with GNN-GraphSAGE achieving AUC-ROC 0.9248 and F1=0.6013. All results are mapped directly to OCC, SR 11-7, and BSA-AML regulatory compliance requirements.
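The stability metric in the first contribution, Kendall's coefficient of concordance (W), measures how consistently feature-importance rankings agree across bootstrap samples; W = 1 means every sample ranks the features identically, which is the regime the XGBoost/TreeExplainer result (W=0.9912) approaches. A minimal sketch of the computation (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def kendalls_w(rankings: np.ndarray) -> float:
    """Kendall's coefficient of concordance for an (m, n) array:
    m raters (here, bootstrap samples) each ranking n items (features).
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared
    deviations of the per-item rank sums from their mean."""
    m, n = rankings.shape
    rank_sums = rankings.sum(axis=0)
    s = float(((rank_sums - rank_sums.mean()) ** 2).sum())
    return 12.0 * s / (m**2 * (n**3 - n))

# Perfect agreement: all 30 bootstrap samples rank 10 features identically.
identical = np.tile(np.arange(1, 11), (30, 1))
print(kendalls_w(identical))  # -> 1.0
```

In practice each row would be the rank ordering of mean |SHAP| values on one bootstrap resample; values near 1 (as for TreeExplainer) indicate explanations auditors can reproduce, while values near 0.5 (as for DeepExplainer) indicate rankings that shift substantially between resamples.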

Top tags: financial model evaluation, machine learning
Detailed tags: explainable AI, fraud detection, ensemble learning, Shapley values, regulatory compliance

Shapley Value-Guided Adaptive Ensemble Learning for Explainable Financial Fraud Detection with U.S. Regulatory Compliance Validation


1️⃣ One-Sentence Summary

This study proposes an adaptive ensemble learning method that dynamically adjusts model weights based on SHAP values, improving financial fraud detection accuracy while quantitatively evaluating the stability and faithfulness of model explanations, thereby meeting U.S. financial regulators' compliance requirements for AI model transparency and auditability.
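The abstract states only that SGAE adjusts per-transaction ensemble weights based on SHAP attribution agreement; the exact agreement measure and weighting rule are not given. A hypothetical sketch of one way such a scheme could work, using cosine similarity to the consensus attribution and a softmax over agreement scores (all names and the similarity choice are assumptions, not the paper's method):

```python
import numpy as np

def sgae_weights(shap_values: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Illustrative per-transaction weighting in the spirit of SGAE.
    shap_values: (n_models, n_features) SHAP attributions for ONE transaction.
    Models whose attributions agree with the ensemble consensus (the mean
    attribution vector) receive larger softmax weights."""
    consensus = shap_values.mean(axis=0)
    norms = np.linalg.norm(shap_values, axis=1) * np.linalg.norm(consensus)
    agreement = shap_values @ consensus / np.where(norms == 0.0, 1.0, norms)
    z = agreement / temperature
    w = np.exp(z - z.max())  # numerically stable softmax
    return w / w.sum()

# Three models; the third attributes the score to different features.
shap = np.array([[0.9, 0.1, -0.2],
                 [0.8, 0.2, -0.1],
                 [-0.5, 0.9, 0.3]])
w = sgae_weights(shap)
probs = np.array([0.92, 0.88, 0.35])  # each model's fraud probability
blended = float(w @ probs)            # agreement-weighted ensemble score
```

The design intuition is that a model whose explanation diverges from the others on a given transaction is treated as less reliable for that transaction, so its vote is discounted rather than dropped.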

Source: arXiv:2604.14231