arXiv submission date: 2026-04-15
📄 Abstract - Secure and Privacy-Preserving Vertical Federated Learning

We propose a novel end-to-end privacy-preserving framework for the vertically split scenario in federated learning (FL), where features are split across clients and labels are not shared by all parties. The framework is instantiated by three efficient protocols for different deployment scenarios and covers both input and output privacy. We achieve this by distributing the role of the FL aggregator across multiple servers, which run secure multiparty computation (MPC) protocols to perform model and feature aggregation, and by applying differential privacy (DP) to the final released model. While a naive solution would have the clients delegate the entirety of training to an MPC computation between the servers, our optimized solution, which supports privacy-preserving updates for purely global as well as combined global-local models, drastically reduces the amount of computation and communication performed inside MPC. Experimental results also demonstrate the effectiveness of our protocols.
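The core idea above, clients secret-sharing their updates among several servers so that no single server sees any client's data, with DP noise added only to the final aggregate, can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: it assumes additive secret sharing over a prime field with fixed-point encoding and a Gaussian DP mechanism, and all names (`share`, `aggregate_with_dp`, the modulus choice) are hypothetical.

```python
import secrets
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)
SCALE = 10**6      # fixed-point scaling factor for real-valued updates

def share(value, n_servers):
    """Additively secret-share a fixed-point value among n servers."""
    fixed = int(round(value * SCALE)) % PRIME
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((fixed - sum(shares)) % PRIME)  # shares sum to `fixed` mod PRIME
    return shares

def reconstruct(shares):
    """Recover the real value from a full set of additive shares."""
    total = sum(shares) % PRIME
    if total > PRIME // 2:  # map back to the signed range
        total -= PRIME
    return total / SCALE

def aggregate_with_dp(client_updates, n_servers, noise_std):
    """Each client shares its update; each server sums only the shares it
    holds, so no server learns an individual update. Gaussian noise is
    added once, to the reconstructed aggregate (output privacy)."""
    server_sums = [0] * n_servers
    for update in client_updates:
        for i, s in enumerate(share(update, n_servers)):
            server_sums[i] = (server_sums[i] + s) % PRIME
    aggregate = reconstruct(server_sums)
    return aggregate + random.gauss(0.0, noise_std)
```

For example, `aggregate_with_dp([0.5, -1.0, 2.0], 3, 0.0)` returns the exact sum `1.5`; with a positive `noise_std` the released aggregate is perturbed for differential privacy.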

Top-level tags: machine learning, systems, theory
Detailed tags: federated learning, privacy, secure multiparty computation, differential privacy, vertical split

Secure and Privacy-Preserving Vertical Federated Learning


1️⃣ One-sentence summary

This paper proposes a novel end-to-end privacy-preserving framework that distributes the aggregator's role across multiple servers and combines secure multiparty computation with differential privacy, efficiently protecting both data inputs and the released model in the vertical federated learning setting while drastically reducing computation and communication overhead.

Source: arXiv:2604.13474