arXiv submission date: 2026-03-04
📄 Abstract - Structure-Aware Distributed Backdoor Attacks in Federated Learning

While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing studies on backdoor attacks in federated learning mainly focus on trigger design or poisoning strategies, typically assuming that identical perturbations behave similarly across different model architectures. This assumption overlooks the impact of model structure on perturbation effectiveness. From a structure-aware perspective, this paper analyzes the coupling relationship between model architectures and backdoor perturbations. We introduce two metrics, Structural Responsiveness Score (SRS) and Structural Compatibility Coefficient (SCC), to measure a model's sensitivity to perturbations and its preference for fractal perturbations. Based on these metrics, we develop a structure-aware fractal perturbation injection framework (TFI) to study the role of architectural properties in the backdoor injection process. Experimental results show that model architecture significantly influences the propagation and aggregation of perturbations. Networks with multi-path feature fusion can amplify and retain fractal perturbations even under low poisoning ratios, while models with low structural compatibility constrain their effectiveness. Further analysis reveals a strong correlation between SCC and attack success rate, suggesting that SCC can predict perturbation survivability. These findings highlight that backdoor behaviors in federated learning depend not only on perturbation design or poisoning intensity but also on the interaction between model architecture and aggregation mechanisms, offering new insights for structure-aware defense design.
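The abstract defines SRS as a measure of a model's sensitivity to perturbations. The paper's exact formula is not given here, so the following is only a minimal sketch of the general idea: apply small random weight perturbations to a toy network and report the average relative change in its output. The function name `responsiveness_score`, the network shape, and the perturbation scheme are all illustrative assumptions, not the paper's definition.

```python
import numpy as np

def forward(x, W1, W2):
    # Tiny two-layer network: ReLU hidden layer, linear output.
    h = np.maximum(0.0, x @ W1)
    return h @ W2

def responsiveness_score(x, W1, W2, eps=1e-2, trials=20, seed=0):
    """Average relative output change under small random weight
    perturbations -- a hypothetical stand-in for an SRS-style metric."""
    rng = np.random.default_rng(seed)
    base = forward(x, W1, W2)
    deltas = []
    for _ in range(trials):
        # Perturb each weight matrix with Gaussian noise of scale eps.
        W1p = W1 + eps * rng.standard_normal(W1.shape)
        W2p = W2 + eps * rng.standard_normal(W2.shape)
        out = forward(x, W1p, W2p)
        deltas.append(np.linalg.norm(out - base) /
                      (np.linalg.norm(base) + 1e-12))
    return float(np.mean(deltas))

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4))
W1 = rng.standard_normal((4, 16))
W2 = rng.standard_normal((16, 3))
score = responsiveness_score(x, W1, W2)
print(score)
```

Under this reading, comparing such scores across architectures (e.g. single-path vs. multi-path feature fusion) would capture the structural sensitivity the paper argues drives backdoor survivability.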

Top-level tags: systems, machine learning, model training
Detailed tags: federated learning, backdoor attacks, model architecture, adversarial robustness, security

Structure-Aware Distributed Backdoor Attacks in Federated Learning


1️⃣ One-sentence summary

This paper finds that in federated learning, the success of a backdoor attack depends not only on the attack strategy itself but also, to a large degree, on how sensitive the model's internal structure is to perturbations. It proposes two quantitative metrics to predict and exploit this structural dependence, offering a new direction for designing more effective defenses.

Source: arXiv 2603.03865