On the Sensitivity of Firing Rate-Based Federated Spiking Neural Networks to Differential Privacy
1️⃣ One-Sentence Summary
This paper finds that in federated spiking neural network learning systems, the differential privacy mechanisms added for privacy protection (such as gradient clipping and noise injection) significantly alter the statistical properties of neuronal firing rates, which in turn degrades the coordination efficiency of the federated learning process, so a balance must be struck between privacy strength and system performance.
Federated Neuromorphic Learning (FNL) enables energy-efficient and privacy-preserving learning on devices without centralizing data. However, real-world deployments require additional privacy mechanisms that can significantly alter training signals. This paper analyzes how Differential Privacy (DP) mechanisms, specifically gradient clipping and noise injection, perturb firing-rate statistics in Spiking Neural Networks (SNNs) and how these perturbations propagate to rate-based FNL coordination. On a speech recognition task under non-IID settings, ablations across privacy budgets and clipping bounds reveal systematic rate shifts, attenuated aggregation, and ranking instability during client selection. Moreover, we relate these shifts to sparsity and memory indicators. Our findings provide actionable guidance for privacy-preserving FNL, specifically regarding the balance between privacy strength and rate-dependent coordination.
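The two mechanisms the abstract names can be made concrete with a short sketch. The Python snippet below (a minimal sketch assuming NumPy; function names such as `dp_sanitize_update` and `firing_rate`, and all numeric values, are illustrative and not taken from the paper) shows DP-SGD-style clipping plus Gaussian noise on a client update, and how even a small perturbation of observed firing rates can reorder a rate-based client ranking:

```python
import numpy as np

def dp_sanitize_update(grad, clip_bound, noise_multiplier, rng):
    """DP-SGD-style sanitization: clip the update to L2 norm
    `clip_bound`, then add Gaussian noise scaled to that bound."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_bound / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_bound, grad.shape)

def firing_rate(spikes):
    """Per-neuron firing rate from a binary spike train of shape
    (timesteps, neurons): the fraction of timesteps with a spike."""
    return spikes.mean(axis=0)

rng = np.random.default_rng(0)

# Sanitize a toy client update: clipping bounds its norm, noise blurs it.
grad = rng.normal(size=128)
private = dp_sanitize_update(grad, clip_bound=1.0, noise_multiplier=0.5, rng=rng)
print("update norm before/after:", np.linalg.norm(grad).round(2),
      np.linalg.norm(private).round(2))

# Five clients with nearly equal true rates; a small rate shift
# (a stand-in for DP-induced perturbation) can flip their ranking.
true_rates = np.array([0.20, 0.21, 0.22, 0.23, 0.24])
observed = [firing_rate((rng.random((200, 64)) < p).astype(float)).mean()
            for p in true_rates]
shifted = [r + rng.normal(0.0, 0.02) for r in observed]

print("rank by observed rate:", np.argsort(observed)[::-1])
print("rank by shifted rate: ", np.argsort(shifted)[::-1])
```

The ranking demo illustrates why clients whose firing rates are close together are the first to swap places under perturbation, which is one plausible reading of the ranking instability during client selection that the abstract reports.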
Source: arXiv: 2602.12009