arXiv submission date: 2026-03-16
📄 Abstract - FairMed-XGB: A Bayesian-Optimised Multi-Metric Framework with Explainability for Demographic Equity in Critical Healthcare Data

Machine learning models deployed in critical care settings exhibit demographic biases, particularly gender disparities, that undermine clinical trust and equitable treatment. This paper introduces FairMed-XGB, a novel framework that systematically detects and mitigates gender-based prediction bias while preserving model performance and transparency. The framework integrates a fairness-aware loss function combining Statistical Parity Difference, Theil Index, and Wasserstein Distance, jointly optimised via Bayesian Search into an XGBoost classifier. Post-mitigation evaluation on seven clinically distinct cohorts derived from the MIMIC-IV-ED and eICU databases demonstrates substantial bias reduction: Statistical Parity Difference decreases by 40 to 51 percent on MIMIC-IV-ED and 10 to 19 percent on eICU; Theil Index collapses by four to five orders of magnitude to near-zero values; Wasserstein Distance is reduced by 20 to 72 percent. These gains are achieved with negligible degradation in predictive accuracy (AUC-ROC drop <0.02). SHAP-based explainability reveals that the framework diminishes reliance on gender-proxy features, providing clinicians with actionable insights into how and where bias is corrected. FairMed-XGB offers a robust, interpretable, and ethically aligned solution for equitable clinical decision-making, paving the way for trustworthy deployment of AI in high-stakes healthcare environments.
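To make the three fairness metrics in the loss concrete, here is a minimal stdlib-only Python sketch of how each could be computed on binary predictions and risk scores for two gender groups. The metric definitions are standard; the toy data, equal-size-group Wasserstein computation, and the unit penalty weights are illustrative assumptions (in the paper the weights are tuned via Bayesian search, and the combined term enters the XGBoost objective).

```python
import math

def statistical_parity_difference(y_pred, group):
    # |P(pred=1 | g=0) - P(pred=1 | g=1)|: gap in positive-prediction rates
    rates = []
    for g in (0, 1):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

def theil_index(scores):
    # Theil T index of (positive) predicted scores: 0 means perfect equality
    mu = sum(scores) / len(scores)
    return sum((s / mu) * math.log(s / mu) for s in scores) / len(scores)

def wasserstein_1d(a, b):
    # 1-D earth mover's distance via matched sorted samples
    # (simplified: assumes both groups have the same number of samples)
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy example: hard predictions and risk scores for two gender groups
group  = [0, 0, 0, 1, 1, 1]
y_hard = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]

spd = statistical_parity_difference(y_hard, group)
ti  = theil_index(scores)
wd  = wasserstein_1d([s for s, g in zip(scores, group) if g == 0],
                     [s for s, g in zip(scores, group) if g == 1])

# Hypothetical combined fairness penalty; in FairMed-XGB the relative
# weights are what Bayesian optimisation would tune, not fixed at 1.0.
penalty = 1.0 * spd + 1.0 * ti + 1.0 * wd
```

Each metric is zero when predictions are distributed identically across groups, so driving this penalty toward zero (while constraining AUC-ROC loss) is the optimisation target the abstract describes.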

Top-level tags: medical machine learning model evaluation
Detailed tags: fairness healthcare bias mitigation xgboost explainable ai

FairMed-XGB: A Bayesian-Optimised Multi-Metric Framework with Explainability for Demographic Equity in Critical Healthcare Data


1️⃣ One-Sentence Summary

This paper proposes a new framework called FairMed-XGB, which combines multiple fairness metrics and uses Bayesian optimisation to substantially reduce gender bias in critical-care machine learning models while maintaining high predictive accuracy, and which can explain to clinicians how the bias is being corrected.

Source: arXiv:2603.14947