arXiv submission date: 2026-04-16
📄 Abstract - No More Guessing: a Verifiable Gradient Inversion Attack in Federated Learning

Gradient inversion attacks threaten client privacy in federated learning by reconstructing training samples from clients' shared gradients. Gradients aggregate contributions from multiple records and existing attacks may fail to disentangle them, yielding incorrect reconstructions with no intrinsic way to certify success. In vision and language, attackers may fall back on human inspection to judge reconstruction plausibility, but this is far less feasible for numerical tabular records, fueling the impression that tabular data is less vulnerable. We challenge this perception by proposing a verifiable gradient inversion attack (VGIA) that provides an explicit certificate of correctness for reconstructed samples. Our method adopts a geometric view of ReLU leakage: the activation boundary of a fully connected layer defines a hyperplane in input space. VGIA introduces an algebraic, subspace-based verification test that detects when a hyperplane-delimited region contains exactly one record. Once isolation is certified, VGIA recovers the corresponding feature vector analytically and reconstructs the target via a lightweight optimization step. Experiments on tabular benchmarks with large batch sizes demonstrate exact record and target recovery in regimes where existing state-of-the-art attacks either fail or cannot assess reconstruction fidelity. Compared to prior geometric approaches, VGIA allocates hyperplane queries more effectively, yielding faster reconstructions with fewer attack rounds.
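The abstract's geometric view rests on a classical ReLU-leakage observation: for a fully connected layer, a neuron's weight-gradient row aggregates the inputs of the records that activated it, so if a neuron fires for exactly one record, dividing its weight gradient by its bias gradient recovers that record's feature vector exactly. The NumPy sketch below illustrates only this underlying leakage with hand-picked toy values (the weights, batch, and upstream gradient are assumptions for the demo); it is not the paper's VGIA verification test.

```python
import numpy as np

# Toy batch of 3 tabular records with 3 features each (made-up values).
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [-1.0, 0.0, 2.0]])

# One fully connected layer followed by ReLU; weights and biases are
# chosen by hand so that neuron 0 fires for exactly one record (record 1).
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([-3.5, 1.0])

pre = X @ W.T + b                # pre-activations, shape (3, 2)
mask = (pre > 0).astype(float)   # ReLU activation pattern per record/neuron

# Take dL/d(ReLU output) = 1 for every active unit (any nonzero upstream
# gradient works for the ratio trick); backprop through ReLU gives mask.
g_pre = mask
grad_W = g_pre.T @ X             # gradient w.r.t. W, summed over the batch
grad_b = g_pre.sum(axis=0)       # gradient w.r.t. b

# Neuron 0 is active for exactly one record, so its weight-gradient row
# is that record's feature vector scaled by the matching bias gradient.
x_rec = grad_W[0] / grad_b[0]
print(x_rec)                     # prints record 1: [4. 5. 6.]
```

The hard part, which VGIA addresses, is certifying the "exactly one record" condition without ground truth: the attacker only observes aggregated gradients, so it needs an algebraic test on the hyperplane-delimited region rather than the activation counts used above for illustration.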

Top-level tags: machine learning systems, model training
Detailed tags: federated learning, privacy attacks, gradient inversion, verifiable attack, tabular data

No More Guessing: a Verifiable Gradient Inversion Attack in Federated Learning


1️⃣ One-sentence summary

This paper proposes a verifiable gradient inversion attack (VGIA) that combines geometric analysis with algebraic verification to exactly recover clients' tabular records in federated learning, providing for the first time an explicit certificate that a reconstruction is correct and challenging the common perception that tabular data is less vulnerable to such attacks.

Source: arXiv 2604.15063