ProxyFL: A Proxy-Guided Framework for Federated Semi-Supervised Learning
1️⃣ One-sentence summary
This paper proposes a new framework called ProxyFL, which uses a unified "proxy" to simultaneously address the data discrepancies in federated semi-supervised learning, both across different clients and between labeled and unlabeled data within each client, so that more data can participate in training and model performance improves while privacy is preserved.
Federated Semi-Supervised Learning (FSSL) aims to collaboratively train a global model across clients by leveraging partially-annotated local data in a privacy-preserving manner. Data heterogeneity is a challenging issue in FSSL, existing both across and within clients: external heterogeneity refers to the data distribution discrepancy across different clients, while internal heterogeneity denotes the mismatch between labeled and unlabeled data within a client. Most FSSL methods design fixed or dynamic parameter-aggregation strategies to collect client knowledge on the server (external) and/or filter out low-confidence unlabeled samples to reduce mistakes on local clients (internal). However, the former struggles to precisely fit the ideal global distribution through direct weight aggregation, and the latter leaves less data participating in FL training. To this end, we propose a proxy-guided framework called ProxyFL that simultaneously mitigates external and internal heterogeneity via a unified proxy: we treat the learnable classifier weights as a proxy that simulates the category distribution both locally and globally. For external heterogeneity, we explicitly optimize the global proxy against outliers instead of aggregating weights directly; for internal heterogeneity, we re-include discarded samples in training through a positive-negative proxy pool, mitigating the impact of potentially incorrect pseudo-labels. Extensive experiments and theoretical analysis demonstrate the strong performance and convergence of our method in FSSL.
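The proxy idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function names (`class_proxies`, `split_by_confidence`, `aggregate_proxies`), the cosine-similarity pseudo-labeling, the confidence threshold `tau`, and the trimmed-mean server aggregation are all hypothetical stand-ins for ProxyFL's proxy optimization and proxy-pool losses.

```python
import numpy as np

def class_proxies(classifier_weights):
    # Each row of the classifier weight matrix acts as the proxy
    # (prototype) of one category; L2-normalize so that the dot
    # product with a normalized feature is a cosine similarity.
    w = np.asarray(classifier_weights, dtype=float)
    return w / np.linalg.norm(w, axis=1, keepdims=True)

def split_by_confidence(features, proxies, tau=0.8):
    # Pseudo-label unlabeled features by their most similar proxy.
    # Samples below the confidence threshold `tau` would normally be
    # discarded; ProxyFL instead keeps them, training them against a
    # positive proxy (best match) and negative proxies (the rest).
    f = np.asarray(features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    sims = f @ proxies.T                 # (N, C) cosine similarities
    pseudo = sims.argmax(axis=1)         # per-sample pseudo-label
    confident = sims.max(axis=1) >= tau  # True -> hard pseudo-label branch
    return pseudo, confident

def aggregate_proxies(client_proxies, trim=0.2):
    # Server side: a coordinate-wise trimmed mean over client proxies,
    # a robust stand-in (assumption) for "optimizing the global proxy
    # against outliers" instead of directly averaging model weights.
    stack = np.sort(np.stack([np.asarray(p, dtype=float)
                              for p in client_proxies]), axis=0)
    k = int(len(client_proxies) * trim)
    kept = stack[k:len(client_proxies) - k] if len(client_proxies) > 2 * k else stack
    return kept.mean(axis=0)
```

For example, with two orthogonal class proxies, a feature near the decision boundary such as `[0.7, 0.7]` falls below `tau=0.8` and would be routed to the proxy-pool branch rather than dropped from training.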
Source: arXiv:2602.21078