arXiv submission date: 2026-03-03
📄 Abstract - HiLoRA: Hierarchical Low-Rank Adaptation for Personalized Federated Learning

Vision Transformers (ViTs) have been widely adopted in vision tasks due to their strong transferability. In Federated Learning (FL), where full fine-tuning is communication heavy, Low-Rank Adaptation (LoRA) provides an efficient and communication-friendly way to adapt ViTs. However, existing LoRA-based federated tuning methods overlook latent client structures in real-world settings, limiting shared representation learning and hindering effective adaptation to unseen clients. To address this, we propose HiLoRA, a hierarchical LoRA framework that places adapters at three levels: root, cluster, and leaf, each designed to capture global, subgroup, and client-specific knowledge, respectively. Through cross-tier orthogonality and cascaded optimization, HiLoRA separates update subspaces and aligns each tier with its residual personalized objective. In particular, we develop a LoRA-Subspace Adaptive Clustering mechanism that infers latent client groups via subspace similarity analysis, thereby facilitating knowledge sharing across structurally aligned clients. Theoretically, we establish a tier-wise generalization analysis that supports HiLoRA's design. Experiments on ViT backbones with CIFAR-100 and DomainNet demonstrate consistent improvements in both personalization and generalization.
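The three-tier design described above composes additive low-rank updates on top of a frozen backbone weight. A minimal sketch (not the authors' code; dimensions, ranks, and names are illustrative assumptions) of how root, cluster, and leaf adapters could combine:

```python
# Hypothetical sketch of hierarchical LoRA composition: each tier contributes
# an additive low-rank update B @ A to a frozen weight W0.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 16, 2  # illustrative dimensions and per-tier rank

W0 = rng.standard_normal((d_out, d_in))  # frozen backbone weight

def lora_pair(rank):
    """One LoRA adapter: down/up factors B (d_out x rank), A (rank x d_in)."""
    return rng.standard_normal((d_out, rank)), rng.standard_normal((rank, d_in))

# root / cluster / leaf tiers capture global, subgroup, and client-specific
# knowledge, respectively
tiers = {name: lora_pair(r) for name in ("root", "cluster", "leaf")}

def effective_weight(W0, tiers):
    """Frozen weight plus the sum of every tier's low-rank update."""
    return W0 + sum(B @ A for B, A in tiers.values())

W_eff = effective_weight(W0, tiers)
```

In a federated setting, only the small factors of the relevant tiers would be communicated; the paper's cross-tier orthogonality and cascaded optimization (not shown here) keep the tiers' update subspaces separated.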

Top-level tags: model training, systems, machine learning
Detailed tags: federated learning, parameter-efficient fine-tuning, vision transformers, personalization, low-rank adaptation

HiLoRA: Hierarchical Low-Rank Adaptation for Personalized Federated Learning


1️⃣ One-sentence summary

This paper proposes HiLoRA, a hierarchical low-rank adaptation framework that deploys adapters at three tiers (root, cluster, and leaf) to learn global, subgroup, and client-specific knowledge respectively, thereby more effectively improving both the personalization and the generalization of Vision Transformer models in federated learning.
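The clustering side of the summary, grouping clients by the similarity of their LoRA update subspaces, can be sketched as follows. This is an illustrative guess at the mechanism, not the paper's implementation: it compares the column spaces of clients' LoRA factors via the cosines of their principal angles.

```python
# Hypothetical sketch of LoRA-subspace similarity for client clustering.
# Clients whose adapter subspaces align would be assigned to the same cluster.
import numpy as np

rng = np.random.default_rng(1)

def subspace_sim(B1, B2):
    """Similarity of two column spaces: mean squared cosine of principal angles."""
    Q1, _ = np.linalg.qr(B1)  # orthonormal basis of span(B1)
    Q2, _ = np.linalg.qr(B2)
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)  # cosines of principal angles
    return float(np.mean(s**2))

# Toy data: clients 0 and 1 share a subspace (up to noise); client 2 does not.
shared = rng.standard_normal((16, 2))
B = [shared + 0.01 * rng.standard_normal((16, 2)),
     shared + 0.01 * rng.standard_normal((16, 2)),
     rng.standard_normal((16, 2))]

sims = [[subspace_sim(Bi, Bj) for Bj in B] for Bi in B]
```

With this measure, structurally aligned clients (0 and 1) score near 1 while the unrelated client scores much lower, which is the signal a clustering step could threshold on.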

Source: arXiv:2603.02785