Balancing Privacy-Quality-Efficiency in Federated Learning through Round-Based Interleaving of Protection Techniques
1️⃣ One-sentence summary
This paper proposes a new framework called Alt-FL that alternates among three techniques, differential privacy, homomorphic encryption, and synthetic data, across training rounds, thereby balancing privacy protection, model quality, and system efficiency in federated learning at the same time.
In federated learning (FL), balancing privacy protection, learning quality, and efficiency remains a challenge. Privacy protection mechanisms either degrade learning quality, as with Differential Privacy (DP), or incur substantial system overhead, as with Homomorphic Encryption (HE). To address this, we propose Alt-FL, a privacy-preserving FL framework that combines DP, HE, and synthetic data via a novel round-based interleaving strategy. Alt-FL introduces three new methods, Privacy Interleaving (PI), Synthetic Interleaving with DP (SI/DP), and Synthetic Interleaving with HE (SI/HE), that enable flexible quality-efficiency trade-offs while providing privacy protection. We systematically evaluate Alt-FL against representative reconstruction attacks, including Deep Leakage from Gradients, Inverting Gradients, When the Curious Abandon Honesty, and Robbing the Fed, using a LeNet-5 model on CIFAR-10 and Fashion-MNIST. To enable fair comparison between DP- and HE-based defenses, we introduce a new attacker-centric framework that compares empirical attack success rates across the three proposed interleaving methods. Our results show that, for the studied attacker model and dataset, PI achieves the most balanced trade-offs at high privacy protection levels, while DP-based methods are preferable at intermediate privacy requirements. We also discuss how such results can be the basis for selecting privacy-preserving FL methods under varying privacy and resource constraints.
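The round-based interleaving idea can be illustrated with a minimal scheduling sketch. The code below is a hypothetical illustration, not the paper's implementation: the function names (`pi_schedule`, `si_dp_schedule`) and the exact alternation patterns (DP/HE every other round for PI; a synthetic-data round every `syn_every` rounds for SI/DP) are assumptions made here for clarity.

```python
from enum import Enum

class Protection(Enum):
    """Protection technique applied to a given FL training round."""
    DP = "differential_privacy"
    HE = "homomorphic_encryption"
    SYN = "synthetic_data"

def pi_schedule(round_idx: int) -> Protection:
    """Privacy Interleaving (PI): alternate DP and HE across rounds.

    Hypothetical schedule -- the paper's exact policy may differ.
    """
    return Protection.DP if round_idx % 2 == 0 else Protection.HE

def si_dp_schedule(round_idx: int, syn_every: int = 3) -> Protection:
    """Synthetic Interleaving with DP (SI/DP): a synthetic-data round
    every `syn_every`-th round, DP-protected real-data rounds otherwise.

    Hypothetical schedule and default -- assumptions for illustration.
    """
    return Protection.SYN if round_idx % syn_every == 0 else Protection.DP

# Protection chosen for the first six rounds under SI/DP
print([si_dp_schedule(r).name for r in range(6)])
# → ['SYN', 'DP', 'DP', 'SYN', 'DP', 'DP']
```

Under this sketch, a server would look up the round's protection before aggregation and dispatch to the matching client-update path (noised gradients for DP, encrypted aggregation for HE, synthetic-data training for SYN), which is how the framework can trade quality against overhead round by round.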
Source: arXiv:2603.05158