📄 Abstract - FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning

Federated learning (FL) enables collaborative training across clients without compromising privacy. While most existing FL methods assume homogeneous model architectures, client heterogeneity in data and resources renders this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed entangled representation. In FedRE, each client aggregates its local representations into a single entangled representation using normalized random weights and applies the same weights to integrate the corresponding one-hot label encodings into the entangled-label encoding. These are then uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while random weights are resampled each round to introduce diversity, mitigating the global classifier's overconfidence and promoting smoother decision boundaries. Furthermore, each client uploads a single cross-category entangled representation along with its entangled-label encoding, mitigating the risk of representation inversion attacks and reducing communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The codes are available at this https URL.
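The core construction described above — mixing local representations with normalized random weights and applying the same weights to the one-hot labels — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name `entangle` and all shapes are assumptions for demonstration.

```python
import numpy as np

def entangle(representations, labels, num_classes, rng):
    """Mix per-sample representations into one entangled representation,
    and mix the matching one-hot labels with the SAME random weights
    (hypothetical sketch of the scheme described in the abstract)."""
    n = representations.shape[0]
    w = rng.random(n)
    w = w / w.sum()                      # normalized random weights
    entangled_rep = w @ representations  # weighted sum over local representations
    one_hot = np.eye(num_classes)[labels]
    entangled_label = w @ one_hot        # same weights applied to one-hot labels
    return entangled_rep, entangled_label

rng = np.random.default_rng(0)
reps = rng.normal(size=(5, 16))          # 5 local representations, dim 16
labels = np.array([0, 1, 1, 2, 0])
e_rep, e_lab = entangle(reps, labels, num_classes=3, rng=rng)
print(e_rep.shape)   # a single 16-dim entangled representation
print(e_lab.sum())   # entangled-label entries sum to 1 (weights are normalized)
```

Because the weights are resampled each round, a fresh call with a new `rng` draw yields a different mixture, which is the source of the diversity the abstract credits with smoothing the global classifier's decision boundaries.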

Top-level tags: machine learning systems model training
Detailed tags: federated learning model heterogeneity privacy protection representation learning communication efficiency

FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning


1️⃣ One-sentence summary

This paper proposes a method called FedRE, in which each device (client) uploads a single "entangled representation" that mixes multiple pieces of information, which is then used to train a global model. This addresses the collaboration challenge posed by heterogeneous model architectures across devices in federated learning while protecting privacy and reducing communication cost.


📄 Open original PDF