arXiv submission date: 2026-02-25
📄 Abstract - Private and Robust Contribution Evaluation in Federated Learning

Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR10 in cross-silo settings. Our scores consistently outperform existing baselines, better approximate Shapley-induced client rankings, and improve downstream model performance as well as misbehavior detection. These results demonstrate that fairness, privacy, robustness, and practical utility can be achieved jointly in federated contribution evaluation, offering a principled solution for real-world cross-silo deployments.
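The abstract contrasts the proposed scores with the Leave-One-Out (LOO) baseline, in which each client's contribution is the drop in global utility when its update is excluded from the aggregate. The sketch below illustrates only that well-known baseline on toy data; it is not the paper's Fair-Private or Everybody-Else method, and the client updates and utility function are hypothetical stand-ins.

```python
# Toy illustration of the Leave-One-Out (LOO) contribution baseline.
# NOT the paper's method; updates and utility are hypothetical stand-ins.
from typing import Callable, Dict, List


def leave_one_out_scores(
    updates: Dict[str, List[float]],
    utility: Callable[[List[float]], float],
) -> Dict[str, float]:
    """LOO score of client i = U(avg of all updates) - U(avg without i)."""

    def average(vectors: List[List[float]]) -> List[float]:
        n = len(vectors)
        return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

    full_utility = utility(average(list(updates.values())))
    scores = {}
    for client in updates:
        rest = [u for c, u in updates.items() if c != client]
        scores[client] = full_utility - utility(average(rest))
    return scores


# Hypothetical utility: negative squared distance of the aggregated
# update to an assumed "true" update direction.
true_direction = [1.0, 1.0]


def utility(agg: List[float]) -> float:
    return -sum((a - t) ** 2 for a, t in zip(agg, true_direction))


updates = {
    "hospital_A": [1.0, 1.0],    # helpful update
    "hospital_B": [0.9, 1.1],    # helpful update
    "hospital_C": [-1.0, -1.0],  # low-quality / adversarial update
}
scores = leave_one_out_scores(updates, utility)
# The adversarial client receives the lowest score.
```

Note that computing these per-client scores requires seeing each individual update, which is exactly what secure aggregation hides; this incompatibility motivates the paper's aggregation-compatible marginal-difference scores.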

Top-level tags: machine learning systems, model evaluation
Detailed tags: federated learning, contribution evaluation, privacy, secure aggregation, fairness

Private and Robust Contribution Evaluation in Federated Learning


1️⃣ One-sentence summary

This paper proposes two new contribution-evaluation methods that can fairly, securely, and efficiently measure each participant's contribution in privacy-preserving federated learning, while effectively resisting malicious behavior.

Source: arXiv:2602.21721