arXiv submission date: 2026-03-19
📄 Abstract - When Differential Privacy Meets Wireless Federated Learning: An Improved Analysis for Privacy and Convergence

Differentially private wireless federated learning (DPWFL) is a promising framework for protecting sensitive user data. However, foundational questions on how to precisely characterize privacy loss remain open, and existing work is further limited by convergence analyses that rely on restrictive convexity assumptions or ignore the effect of gradient clipping. To overcome these issues, we present a comprehensive analysis of privacy and convergence for DPWFL with general smooth non-convex loss objectives. Our analysis explicitly incorporates both device selection and mini-batch sampling, and shows that the privacy loss can converge to a constant rather than diverge with the number of iterations. Moreover, we establish convergence guarantees with gradient clipping and derive an explicit privacy-utility trade-off. Numerical results validate our theoretical findings.
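The core mechanism the abstract refers to can be illustrated with a minimal sketch of a differentially private client update: each selected device clips its per-sample gradients to bound sensitivity, averages them over the mini-batch, and adds Gaussian noise before transmitting. This is a generic DP-SGD-style illustration, not the paper's exact algorithm; the function name and the hyperparameters `clip_norm` and `noise_mult` are illustrative choices, not values from the paper.

```python
import math
import random

def dp_client_update(per_sample_grads, clip_norm=1.0, noise_mult=1.1, seed=0):
    """Clipped, noised mini-batch gradient for one device (illustrative sketch).

    Clips each per-sample gradient's L2 norm to `clip_norm`, averages over the
    mini-batch, then adds Gaussian noise scaled to the sensitivity
    clip_norm / batch_size (the Gaussian mechanism).
    """
    rng = random.Random(seed)
    batch_size = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    avg = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        for i, x in enumerate(g):
            avg[i] += scale * x / batch_size
    # Noise standard deviation grows with the clipping bound and the noise
    # multiplier, and shrinks with the mini-batch size.
    sigma = noise_mult * clip_norm / batch_size
    return [x + rng.gauss(0.0, sigma) for x in avg]
```

With `noise_mult=0` the function reduces to plain clipped averaging, which makes the clipping step easy to verify in isolation; the paper's analysis concerns how the accumulated privacy loss of many such noisy rounds behaves as iterations grow.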

Top-level tags: machine learning, systems, theory
Detailed tags: differential privacy, federated learning, wireless networks, convergence analysis, privacy-utility trade-off

When Differential Privacy Meets Wireless Federated Learning: An Improved Analysis for Privacy and Convergence


1️⃣ One-sentence summary

This work provides a sharper analytical framework for differential privacy in wireless federated learning, proving that under practical conditions such as non-convex optimization and gradient clipping the privacy loss does not accumulate without bound, and making explicit the trade-off between privacy protection and model performance.

Source: arXiv 2603.19040