PrivEraserVerify: Efficient, Private, and Verifiable Federated Unlearning
1️⃣ One-Sentence Summary
This paper proposes PEV, a federated unlearning framework that is the first to combine efficiency, privacy protection, and verifiability of results, enabling AI models to quickly and safely "forget" a specific user's data without significant degradation in model performance.
Federated learning (FL) enables collaborative model training without sharing raw data, offering a promising path toward privacy-preserving artificial intelligence. However, FL models may still memorize sensitive information from participants, conflicting with the right to be forgotten (RTBF). To meet these requirements, federated unlearning has emerged as a mechanism to remove the contribution of departing clients. Existing solutions only partially address this challenge: FedEraser improves efficiency but lacks privacy protection, FedRecovery ensures differential privacy (DP) but degrades accuracy, and VeriFi enables verifiability but introduces overhead without efficiency or privacy guarantees. We present PrivEraserVerify (PEV), a unified framework that integrates efficiency, privacy, and verifiability into federated unlearning. PEV employs (i) adaptive checkpointing to retain critical historical updates for fast reconstruction, (ii) layer-adaptive differentially private calibration to selectively remove client influence while minimizing accuracy loss, and (iii) fingerprint-based verification, enabling participants to confirm unlearning in a decentralized and non-invasive manner. Experiments on image, handwritten-character, and medical datasets show that PEV achieves 2 to 3 times faster unlearning than retraining, provides formal indistinguishability guarantees with reduced performance degradation, and supports scalable verification. To the best of our knowledge, PEV is the first framework to simultaneously deliver efficiency, privacy, and verifiability for federated unlearning, moving FL closer to practical and regulation-compliant deployment.
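To make the layer-adaptive differentially private calibration concrete, here is a minimal sketch of one plausible reading: per-layer Gaussian noise whose scale is calibrated to each layer's sensitivity via the standard Gaussian-mechanism bound. The function names, the use of a layer's L2 norm as its sensitivity, and the per-layer (epsilon, delta) split are illustrative assumptions, not the paper's actual calibration rule.

```python
import math
import random

def gaussian_sigma(epsilon, delta, sensitivity):
    # Standard Gaussian-mechanism calibration:
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

def calibrate_update(layer_updates, epsilon, delta):
    """Add per-layer Gaussian noise with layer-dependent scale.

    `layer_updates` maps layer names to flattened update vectors
    (lists of floats). Using the layer's L2 norm as its sensitivity
    is a stand-in assumption for illustration only.
    """
    noisy = {}
    for name, vec in layer_updates.items():
        sens = math.sqrt(sum(v * v for v in vec)) or 1.0  # avoid sigma = 0
        sigma = gaussian_sigma(epsilon, delta, sens)
        noisy[name] = [v + random.gauss(0.0, sigma) for v in vec]
    return noisy
```

Because sigma tracks each layer's norm, layers carrying a larger share of the departing client's influence receive proportionally more perturbation, which is one way "layer-adaptive" removal could trade privacy against accuracy loss.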
Source: arXiv: 2604.12348