Projected Boosting with Fairness Constraints: Quantifying the Cost of Fair Training Distributions
1️⃣ One-sentence summary
This paper proposes a new method, FairBoost, which preserves the theoretical analyzability of boosting while projecting the training distribution onto a set of distributions satisfying fairness constraints, thereby quantifying and controlling the accuracy cost incurred by pursuing fairness.
Boosting algorithms enjoy strong theoretical guarantees: when the weak learners maintain a positive edge, AdaBoost achieves a geometric decrease of the exponential loss. We study how to incorporate group fairness constraints into boosting while preserving analyzable training dynamics. Our approach, FairBoost, projects the ensemble-induced exponential-weights distribution onto a convex set of distributions satisfying fairness constraints (as a reweighting surrogate), then trains weak learners on this fair distribution. The key theoretical insight is that projecting the training distribution reduces the effective edge of the weak learners by a quantity controlled by the KL divergence of the projection. We prove an exponential-loss bound in which the convergence rate depends on the weak learner's edge minus a "fairness cost" term $\delta_t = \sqrt{\mathrm{KL}(w^t \| q^t)/2}$. This directly quantifies the accuracy-fairness tradeoff in boosting dynamics. Experiments on standard benchmarks validate the theoretical predictions and demonstrate competitive fairness-accuracy tradeoffs with stable training curves.
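To make the projection step concrete, here is a minimal sketch of one FairBoost-style round: the boosting weights are projected onto a simple fairness set (equal total weight per demographic group, an illustrative choice not specified in the abstract), and the fairness cost $\delta_t = \sqrt{\mathrm{KL}(w^t \| q^t)/2}$ is computed from the projection. The function names, the equal-group-mass constraint, and the toy data are all assumptions for illustration; the paper's actual constraint set may differ.

```python
import numpy as np

def project_to_fair(w, groups, target_mass):
    """Project weights w onto {q : total mass of group g == target_mass[g]}.

    Under linear group-mass constraints, the KL-minimizing distribution
    simply rescales w within each group (illustrative constraint choice).
    """
    q = w.copy()
    for g, mass in target_mass.items():
        mask = groups == g
        q[mask] *= mass / q[mask].sum()
    return q

def fairness_cost(w, q):
    """delta_t = sqrt(KL(w || q) / 2), the edge-reduction term in the bound."""
    kl = np.sum(w * np.log(w / q))
    return np.sqrt(max(kl, 0.0) / 2.0)

# Toy example: two groups with skewed exponential-loss weights (synthetic).
rng = np.random.default_rng(0)
groups = np.array([0] * 6 + [1] * 4)
w = rng.exponential(size=10)
w /= w.sum()  # normalize to a distribution

q = project_to_fair(w, groups, {0: 0.5, 1: 0.5})  # equalize group mass
delta = fairness_cost(w, q)
print(q[groups == 0].sum())  # group 0 now carries half the mass
print(delta)                 # nonnegative fairness cost for this round
```

After the projection, a weak learner would be trained on `q` instead of `w`; the bound in the paper says its effective edge shrinks by at most `delta`.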
Source: arXiv: 2602.05713