arXiv submission date: 2026-04-30
📄 Abstract - AdaBFL: Multi-Layer Defensive Adaptive Aggregation for Byzantine-Robust Federated Learning

Federated learning (FL) is a popular distributed learning paradigm in machine learning that enables multiple clients to collaboratively train models under the coordination of a server without exposing private client data. However, FL's decentralized nature makes it vulnerable to poisoning attacks, in which malicious clients submit corrupted models to manipulate the system. Although various Byzantine-robust methods have been proposed to counter such attacks, they struggle to provide balanced defense against multiple attack types or rely on the server possessing a dataset of its own. To address these drawbacks, we propose AdaBFL, an effective multi-layer defensive adaptive aggregation method for Byzantine-robust federated learning, built on a novel three-layer defense mechanism that adaptively adjusts the weights of its defense algorithms to counter complex attacks. Moreover, we establish convergence properties of AdaBFL in the non-convex setting on non-iid data. Comprehensive experiments across multiple datasets validate the superiority of AdaBFL over comparable algorithms.

Top-level tag: machine learning
Detailed tags: federated learning, byzantine-robust, adaptive aggregation, poisoning attacks, non-convex convergence

AdaBFL: Multi-Layer Defensive Adaptive Aggregation for Byzantine-Robust Federated Learning


1️⃣ One-sentence summary

This paper proposes AdaBFL, a multi-layer adaptive defensive aggregation method that dynamically adjusts the weights of a three-layer defense mechanism to effectively resist multiple types of poisoning attacks in federated learning. It guarantees convergence on non-iid data and significantly improves robustness over existing methods.
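The paper itself does not spell out its aggregation rule here, but the idea of adaptively weighting several defense layers can be sketched as follows. This is an illustrative toy, not AdaBFL's actual algorithm: it runs three standard robust aggregators (coordinate-wise median, trimmed mean, and Krum) as "layers" and down-weights any layer whose output deviates from the layers' consensus. All function names and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def median_agg(updates):
    # Layer 1: coordinate-wise median, robust to per-coordinate outliers.
    return np.median(updates, axis=0)

def trimmed_mean_agg(updates, trim=1):
    # Layer 2: coordinate-wise trimmed mean; drop the `trim` largest
    # and smallest values in each coordinate before averaging.
    s = np.sort(updates, axis=0)
    return s[trim:len(updates) - trim].mean(axis=0)

def krum_agg(updates, f=1):
    # Layer 3: Krum; select the update closest to its n - f - 2
    # nearest neighbors (f = assumed number of Byzantine clients).
    n = len(updates)
    d = np.linalg.norm(updates[:, None] - updates[None, :], axis=2) ** 2
    scores = np.sort(d, axis=1)[:, 1:n - f - 1].sum(axis=1)
    return updates[np.argmin(scores)]

def adaptive_multi_layer_agg(updates):
    # Toy "adaptive" combination: weight each layer's output inversely
    # (via exp) to its distance from the layers' mean, so a layer that
    # an attack manages to fool is automatically down-weighted.
    candidates = np.stack([median_agg(updates),
                           trimmed_mean_agg(updates),
                           krum_agg(updates)])
    dists = np.linalg.norm(candidates - candidates.mean(axis=0), axis=1)
    weights = np.exp(-dists)
    weights /= weights.sum()
    return weights @ candidates

# Usage: 9 benign updates near 1.0 plus one poisoned update at 100.0;
# the combined aggregate stays close to the benign mean.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(9, 5))
poisoned = np.full((1, 5), 100.0)
agg = adaptive_multi_layer_agg(np.vstack([benign, poisoned]))
print(agg)
```

A real scheme would also have to learn the weights across rounds rather than recompute them from a single snapshot; the sketch only shows why combining layers can cover attacks that defeat any single rule.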

Source: arXiv: 2604.27434