arXiv submission date: 2026-02-17
📄 Abstract - FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning

Asynchronous Federated Learning (AFL) has emerged as a significant research area in recent years. By executing training concurrently rather than waiting for slower clients, it achieves faster training than traditional federated learning. However, the staleness introduced by the asynchronous process can degrade performance in some scenarios. Existing methods often use the round difference between a client's model and the current global model as the sole measure of staleness, which is coarse-grained and ignores the behavior of the model itself, thereby limiting the performance ceiling of asynchronous methods. In this paper, we propose FedPSA (Parameter Sensitivity-based Asynchronous Federated Learning), a more fine-grained AFL framework that leverages parameter sensitivity to measure model obsolescence and maintains a dynamic momentum queue to assess the current training phase in real time, dynamically adjusting the tolerance for outdated information. Extensive experiments on multiple datasets and comparisons with various methods demonstrate the superior performance of FedPSA, achieving up to a 6.37% improvement over baseline methods and 1.93% over the current state-of-the-art method.
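The abstract describes three ingredients: a parameter-sensitivity measure of staleness (instead of the round-difference counter), a momentum queue tracking recent training dynamics, and a tolerance that down-weights stale updates during aggregation. The paper's exact formulas are not given here, so the sketch below is a hypothetical illustration of that pipeline; the function names, the relative-drift sensitivity measure, and the reciprocal weighting rule are all assumptions, not the authors' definitions.

```python
import numpy as np
from collections import deque

def sensitivity_staleness(client_params, global_params, eps=1e-8):
    # Hypothetical sensitivity-based staleness: mean relative drift between
    # the parameters a client trained on and the current global parameters.
    # Unlike a round counter, this is zero when the model barely changed.
    drift = np.abs(global_params - client_params)
    scale = np.abs(global_params) + eps
    return float(np.mean(drift / scale))

class MomentumQueue:
    """Hypothetical dynamic momentum queue: a sliding window of recent
    global-update magnitudes. Large recent updates suggest an early,
    volatile phase (stale updates still help); small ones suggest a
    near-converged phase (stale updates should be down-weighted)."""
    def __init__(self, maxlen=10):
        self.q = deque(maxlen=maxlen)

    def push(self, update_norm):
        self.q.append(update_norm)

    def tolerance(self):
        # Average recent update magnitude as the current staleness tolerance.
        if not self.q:
            return 1.0
        return float(np.mean(self.q))

def aggregation_weight(staleness, tolerance):
    # Down-weight a stale client update; the penalty is harsher when
    # the training phase (tolerance) indicates the model has stabilized.
    return 1.0 / (1.0 + staleness / max(tolerance, 1e-8))
```

Under this sketch, the server would compute `sensitivity_staleness` for each arriving client update, push the norm of each applied global update into the `MomentumQueue`, and scale the update by `aggregation_weight` before merging it.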

Top-level tags: systems, model training, machine learning
Detailed tags: federated learning, asynchronous training, staleness, parameter sensitivity, momentum queue

FedPSA: Modeling Behavioral Staleness in Asynchronous Federated Learning


1️⃣ One-sentence summary

This paper proposes a new method called FedPSA, which measures the staleness of information in asynchronous federated learning more precisely by analyzing the sensitivity of model parameters to change, and dynamically adjusts the learning strategy accordingly, significantly improving the final performance of the trained model.

Source: arXiv:2602.15337