Intersectional Fairness via Mixed-Integer Optimization
1️⃣ One-sentence summary
This paper proposes a new method based on mixed-integer optimization for training AI classifiers that are both fair and easy to interpret. It specifically targets and effectively addresses the complex biases that arise where multiple protected groups (e.g., race, gender) intersect, offering a practical solution for highly regulated industries such as finance and healthcare.
The deployment of Artificial Intelligence in high-risk domains, such as finance and healthcare, necessitates models that are both fair and transparent. While regulatory frameworks, including the EU's AI Act, mandate bias mitigation, they are deliberately vague about the definition of bias. In line with existing research, we argue that true fairness requires addressing bias at the intersections of protected groups. We propose a unified framework that leverages Mixed-Integer Optimization (MIO) to train intersectionally fair and intrinsically interpretable classifiers. We prove the equivalence of two measures of intersectional fairness (MSD and SPSF) in detecting the most unfair subgroup and empirically demonstrate that our MIO-based algorithm improves performance in finding bias. We train high-performing, interpretable classifiers that bound intersectional bias below an acceptable threshold, offering a robust solution for regulated industries and beyond.
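The abstract's core task, finding the most unfair intersectional subgroup, can be illustrated with a brute-force baseline. The sketch below enumerates every intersection of protected-attribute values and reports the one whose positive-prediction rate deviates most from the overall rate, a simple proxy for the MSD measure named above (the exact definitions of MSD and SPSF, and all names here, are assumptions; the paper's MIO formulation is precisely what avoids this exponential enumeration).

```python
from itertools import product

def max_subgroup_discrepancy(y_pred, groups):
    """Return the intersectional subgroup (e.g., a (race, gender) pair)
    whose positive-prediction rate deviates most from the overall rate.
    Brute-force illustration only; definitions are assumptions, not the
    paper's exact MSD formulation."""
    n = len(y_pred)
    overall = sum(y_pred) / n
    worst_gap, worst_key = 0.0, None
    # Distinct values per protected attribute, taken column-wise.
    values = [sorted(set(col)) for col in zip(*groups)]
    # Enumerate every intersection of attribute values.
    for key in product(*values):
        members = [p for p, g in zip(y_pred, groups) if tuple(g) == key]
        if not members:
            continue  # empty intersection
        gap = abs(sum(members) / len(members) - overall)
        if gap > worst_gap:
            worst_gap, worst_key = gap, key
    return worst_key, worst_gap

# Toy data: binary predictions plus (race, gender) per individual.
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
groups = [("A", "F"), ("A", "F"), ("A", "M"), ("A", "M"),
          ("B", "F"), ("B", "F"), ("B", "M"), ("B", "M")]
print(max_subgroup_discrepancy(y_pred, groups))  # → (('A', 'F'), 0.5)
```

In this toy example the overall positive rate is 0.5, but the ("A", "F") subgroup is always predicted positive, a gap of 0.5 that marginal (single-attribute) checks on race or gender alone would miss, which is exactly the intersectional phenomenon the paper addresses.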
Source: arXiv:2601.19595