arXiv submission date: 2026-03-25
📄 Abstract - HEART-PFL: Stable Personalized Federated Learning under Heterogeneity with Hierarchical Directional Alignment and Adversarial Knowledge Transfer

Personalized Federated Learning (PFL) aims to deliver effective client-specific models under heterogeneous distributions, yet existing methods suffer from shallow prototype alignment and brittle server-side distillation. We propose HEART-PFL, a dual-sided framework that (i) performs depth-aware Hierarchical Directional Alignment (HDA) using cosine similarity in the early stage and MSE matching in the deep stage to preserve client specificity, and (ii) stabilizes global updates through Adversarial Knowledge Transfer (AKT) with symmetric KL distillation on clean and adversarial proxy data. Using lightweight adapters with only 1.46M trainable parameters, HEART-PFL achieves state-of-the-art personalized accuracy on CIFAR-100, Flowers-102, and Caltech-101 (63.42%, 84.23%, and 95.67%, respectively) under Dirichlet non-IID partitions, and remains robust to out-of-domain proxy data. Ablation studies further confirm that HDA and AKT provide complementary gains in alignment, robustness, and optimization stability, offering insights into how the two components mutually reinforce effective personalization. Overall, these results demonstrate that HEART-PFL simultaneously enhances personalization and global stability, highlighting its potential as a strong and scalable solution for PFL (code available at this https URL).
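The abstract names three concrete loss ingredients: cosine alignment for early-stage features, MSE matching for deep-stage features, and symmetric KL for distillation. The paper's exact formulations are not given here, so the following is a minimal numpy sketch under those assumed definitions (cosine loss as 1 − cos similarity; symmetric KL as KL(p‖q) + KL(q‖p)); function names are illustrative, not from the paper.

```python
import numpy as np

def cosine_alignment_loss(f_client, f_global, eps=1e-12):
    """Early-stage HDA term (assumed form): 1 - cosine similarity.

    Matches only the direction of the feature vectors, so a client
    feature may differ in scale from the global one without penalty.
    """
    num = float(np.dot(f_client, f_global))
    den = float(np.linalg.norm(f_client) * np.linalg.norm(f_global)) + eps
    return 1.0 - num / den

def mse_alignment_loss(f_client, f_global):
    """Deep-stage HDA term (assumed form): exact mean-squared matching."""
    return float(np.mean((f_client - f_global) ** 2))

def symmetric_kl(p, q, eps=1e-12):
    """AKT distillation term (assumed form): KL(p||q) + KL(q||p)
    between two probability vectors, e.g. client and global softmax
    outputs on clean or adversarial proxy data.
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Note the design contrast the abstract emphasizes: the cosine term is scale-invariant (`cosine_alignment_loss(v, 2*v)` is ~0), which is what lets early layers align direction while preserving client-specific magnitudes, whereas the MSE term penalizes any deviation in deep semantic features.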

Top-level tags: machine learning systems, model training
Detailed tags: federated learning, personalization, heterogeneous data, adversarial training, knowledge distillation

HEART-PFL: Stable Personalized Federated Learning under Heterogeneity with Hierarchical Directional Alignment and Adversarial Knowledge Transfer


1️⃣ One-sentence summary

This paper proposes a new framework called HEART-PFL, which uses two techniques, hierarchical directional alignment and adversarial knowledge transfer, to address the training instability caused by uneven data distributions in privacy-preserving federated learning, thereby producing a more accurate and more stable personalized model for each client.

Source: arXiv: 2603.24209