arXiv submission date: 2026-03-03
📄 Abstract - Breaking the Prototype Bias Loop: Confidence-Aware Federated Contrastive Learning for Highly Imbalanced Clients

Local class imbalance and data heterogeneity across clients often trap prototype-based federated contrastive learning in a prototype bias loop: biased local prototypes induced by imbalanced data are aggregated into biased global prototypes, which are repeatedly reused as contrastive anchors, accumulating errors across communication rounds. To break this loop, we propose Confidence-Aware Federated Contrastive Learning (CAFedCL), a novel framework that improves the prototype aggregation mechanism and strengthens the contrastive alignment guided by prototypes. CAFedCL employs a confidence-aware aggregation mechanism that leverages predictive uncertainty to downweight high-variance local prototypes. In addition, generative augmentation for minority classes and geometric consistency regularization are integrated to stabilize the structure between classes. From a theoretical perspective, we provide an expectation-based analysis showing that our aggregation reduces estimation variance, thereby bounding global prototype drift and ensuring convergence. Extensive experiments under varying levels of class imbalance and data heterogeneity demonstrate that CAFedCL consistently outperforms representative federated baselines in both accuracy and client fairness.
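The abstract describes a confidence-aware aggregation mechanism that uses predictive uncertainty to downweight high-variance local prototypes. The paper does not give the exact formula here, but the idea can be sketched as inverse-variance weighting of per-client class prototypes; the function name and argument shapes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def aggregate_prototypes(local_protos, local_vars, eps=1e-8):
    """Illustrative sketch of confidence-aware prototype aggregation.

    local_protos: (n_clients, d) array, one class prototype per client.
    local_vars:   (n_clients,) predictive variance per client; higher
                  variance = lower confidence, so that client's
                  prototype contributes less to the global prototype.
    """
    local_protos = np.asarray(local_protos, dtype=float)
    local_vars = np.asarray(local_vars, dtype=float)
    # Inverse-variance weights, normalized to sum to 1.
    w = 1.0 / (local_vars + eps)
    w = w / w.sum()
    # Weighted average yields the global prototype for this class.
    return w @ local_protos
```

With equal variances this reduces to a plain mean, while a client with much higher variance is effectively ignored, which is the variance-reduction behavior the expectation-based analysis in the abstract refers to.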

Top-level tags: machine learning, model training, systems
Detailed tags: federated learning, contrastive learning, class imbalance, prototype bias, confidence-aware aggregation

Breaking the Prototype Bias Loop: Confidence-Aware Federated Contrastive Learning for Highly Imbalanced Clients


1️⃣ One-sentence summary

This paper proposes a new method, CAFedCL, which introduces confidence-aware prototype aggregation and strengthened contrastive learning to effectively break the prototype bias loop caused by class imbalance and data heterogeneity across clients in federated learning, outperforming existing methods in both model accuracy and client fairness.

Source: arXiv 2603.03007