Representation-Aligned Multi-Scale Personalization for Federated Learning
1️⃣ One-sentence summary
This paper proposes FRAMP, a new federated learning framework that automatically generates a tailored lightweight model for each participating device based on its data characteristics and computational capacity, and aligns the knowledge representations learned by the different models to preserve overall learning quality, thereby achieving better personalization and generalization across diverse, resource-constrained devices.
In federated learning (FL), accommodating clients with diverse resource constraints remains a significant challenge. A widely adopted approach is to use a shared full-size model, from which each client extracts a submodel aligned with its computational budget. However, regardless of the specific scoring strategy, these methods rely on the same global backbone, limiting both structural diversity and representational adaptation across clients. This paper presents FRAMP, a unified framework for personalized and resource-adaptive federated learning. Instead of relying on a fixed global model, FRAMP generates client-specific models from compact client descriptors, enabling fine-grained adaptation to both data characteristics and computational budgets. Each client trains a tailored lightweight submodel and aligns its learned representation with others to maintain global semantic consistency. Extensive experiments on vision and graph benchmarks demonstrate that FRAMP enhances generalization and adaptivity across a wide range of client settings.
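The abstract describes each client training its own lightweight submodel while aligning its learned representation with the others for global semantic consistency. The paper's exact alignment objective is not given in this excerpt; a minimal sketch of one plausible form (a hypothetical penalty pulling a client's normalized embedding toward a shared anchor, e.g. the average of other clients' embeddings) might look like:

```python
import numpy as np

def alignment_loss(client_repr: np.ndarray, anchor_repr: np.ndarray) -> float:
    """Hypothetical representation-alignment penalty (not from the paper):
    squared L2 distance between L2-normalized client and anchor embeddings,
    so only the direction of the representation matters, not its scale."""
    c = client_repr / np.linalg.norm(client_repr)
    a = anchor_repr / np.linalg.norm(anchor_repr)
    return float(np.sum((c - a) ** 2))

# Illustrative usage: each client would add this penalty to its local task
# loss; the anchor could be an aggregate of representations from other clients.
client = np.array([1.0, 2.0, 3.0])
anchor = np.array([1.0, 2.0, 2.5])
penalty = alignment_loss(client, anchor)
```

Since each client's submodel may have a different architecture, an alignment term of this kind (operating on embeddings rather than weights) is one way heterogeneous models can still share semantic structure.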
Source: arXiv: 2604.11278