FedAFD: Multimodal Federated Learning via Adversarial Fusion and Distillation
1️⃣ One-Sentence Summary
This paper proposes a new framework called FedAFD, which tackles the performance bottlenecks in multimodal federated learning caused by data, task, and model heterogeneity through adversarial alignment and adaptive fusion on the client side, together with similarity-guided distillation on the server side. It thereby enables devices holding different data modalities to collaboratively train stronger, personalized models while preserving privacy.
Multimodal Federated Learning (MFL) enables clients with heterogeneous data modalities to collaboratively train models without sharing raw data, offering a privacy-preserving framework that leverages complementary cross-modal information. However, existing methods often overlook personalized client performance and struggle with modality/task discrepancies, as well as model heterogeneity. To address these challenges, we propose FedAFD, a unified MFL framework that enhances client and server learning. On the client side, we introduce a bi-level adversarial alignment strategy to align local and global representations within and across modalities, mitigating modality and task gaps. We further design a granularity-aware fusion module to integrate global knowledge into the personalized features adaptively. On the server side, to handle model heterogeneity, we propose a similarity-guided ensemble distillation mechanism that aggregates client representations on shared public data based on feature similarity and distills the fused knowledge into the global model. Extensive experiments conducted under both IID and non-IID settings demonstrate that FedAFD achieves superior performance and efficiency for both the client and the server.
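To make the server-side mechanism concrete, the following is a minimal sketch of similarity-guided ensemble distillation: client representations on shared public data are weighted by their cosine similarity to the global model's representation, fused, and the fused features serve as the distillation target. The function names, the softmax weighting, and the use of cosine similarity are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def similarity_weights(client_feats, global_feat):
    """Aggregation weights for each client's representation on public data.

    Hypothetical choice: cosine similarity to the global model's
    representation, normalized with a softmax. The paper's exact
    similarity measure is not specified here.
    """
    sims = np.array([
        f @ global_feat / (np.linalg.norm(f) * np.linalg.norm(global_feat))
        for f in client_feats
    ])
    e = np.exp(sims - sims.max())   # numerically stable softmax
    return e / e.sum()

def ensemble_distill_target(client_feats, global_feat):
    """Similarity-weighted fusion of client features; the global model
    would then be trained to match this target (e.g. via MSE or KL)."""
    w = similarity_weights(client_feats, global_feat)
    return sum(wi * fi for wi, fi in zip(w, client_feats))
```

A client whose features align more closely with the global representation receives a larger weight, so outlier clients (e.g. with a very different modality or task) contribute less to the distilled knowledge.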
Source: arXiv: 2603.04890