
arXiv submission date: 2026-02-17
📄 Abstract - Fractional-Order Federated Learning

Federated learning (FL) allows remote clients to train a global model collaboratively while protecting client privacy. Despite its privacy-preserving benefits, FL has significant drawbacks, including slow convergence, high communication cost, and non-independent-and-identically-distributed (non-IID) data. In this work, we present a novel FedAvg variation called Fractional-Order Federated Averaging (FOFedAvg), which incorporates Fractional-Order Stochastic Gradient Descent (FOSGD) to capture long-range relationships and deeper historical information. By introducing memory-aware fractional-order updates, FOFedAvg improves communication efficiency and accelerates convergence while mitigating instability caused by heterogeneous, non-IID client data. We compare FOFedAvg against a broad set of established federated optimization algorithms on benchmark datasets including MNIST, FEMNIST, CIFAR-10, CIFAR-100, EMNIST, the Cleveland heart disease dataset, Sent140, PneumoniaMNIST, and Edge-IIoTset. Across a range of non-IID partitioning schemes, FOFedAvg is competitive with, and often outperforms, these baselines in terms of test performance and convergence speed. On the theoretical side, we prove that FOFedAvg converges to a stationary point under standard smoothness and bounded-variance assumptions for fractional order $0<\alpha\le 1$. Together, these results show that fractional-order, memory-aware updates can substantially improve the robustness and effectiveness of federated learning, offering a practical path toward distributed training on heterogeneous data.
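The abstract does not spell out the update rule, but one common way to realize memory-aware fractional-order SGD is a truncated Grünwald-Letnikov (GL) weighting of the stochastic-gradient history, plugged into a FedAvg-style round. The sketch below illustrates that idea; the function names (`gl_weights`, `fosgd_local_update`, `fedavg_round`), the client-side gradient hook, and the GL-based update are illustrative assumptions, not the paper's exact FOFedAvg algorithm.

```python
import numpy as np

def gl_weights(alpha: float, memory: int) -> np.ndarray:
    """Truncated Grünwald-Letnikov weights (-1)^k * C(alpha, k), computed with
    the recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k).
    For 0 < alpha < 1 the magnitudes decay slowly (roughly k^(-alpha-1)),
    which is what gives the update its long memory."""
    c = np.empty(memory + 1)
    c[0] = 1.0
    for k in range(1, memory + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fosgd_local_update(w, grad_fn, steps, lr, alpha, memory):
    """One client's local training (illustrative): each step moves along a
    GL-weighted combination of the current and past stochastic gradients.
    `grad_fn(w)` is a hypothetical hook returning a stochastic gradient at w."""
    coeffs = gl_weights(alpha, memory)
    history = []  # most recent gradient first
    for _ in range(steps):
        history.insert(0, grad_fn(w))
        history = history[: len(coeffs)]
        direction = sum(c * g for c, g in zip(coeffs, history))
        w = w - lr * direction
    return w

def fedavg_round(global_w, client_grad_fns, client_sizes, **fosgd_kw):
    """One FedAvg-style communication round with the fractional local solver:
    broadcast the global model, run local updates, then aggregate by
    data-size-weighted averaging."""
    local_models = [fosgd_local_update(global_w.copy(), fn, **fosgd_kw)
                    for fn in client_grad_fns]
    total = float(sum(client_sizes))
    return sum((n / total) * w for n, w in zip(client_sizes, local_models))
```

With `memory=0` the local solver keeps only the current gradient and reduces to plain SGD inside standard FedAvg, so the memory depth and the order alpha are the knobs that introduce the long-range, history-aware behavior described in the abstract.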

Top-level tags: machine learning systems, model training
Detailed tags: federated learning, fractional-order optimization, non-IID data, convergence analysis, communication efficiency

Fractional-Order Federated Learning


1️⃣ One-Sentence Summary

This paper proposes a new method called FOFedAvg, which exploits historical gradient information through fractional-order gradient descent to address the slow convergence, high communication cost, and uneven (non-IID) data distributions of conventional federated learning, achieving faster convergence and better performance across a variety of datasets.

Source: arXiv:2602.15380