Revisiting Label Inference Attacks in Vertical Federated Learning: Why They Are Vulnerable and How to Defend
1️⃣ One-Sentence Summary
This paper exposes a fundamental weakness of label inference attacks in vertical federated learning: their success rests mainly on the distribution alignment between features and labels, not on the learning capacity of the bottom models themselves. Based on this insight, the authors propose a zero-overhead defense that strengthens resistance by adjusting how the model is partitioned into layers.
Vertical federated learning (VFL) allows an active party holding a top model and multiple passive parties holding bottom models to collaborate. In this setting, passive parties possessing only features may attempt to infer the active party's private labels, making label inference attacks (LIAs) a significant threat. Previous LIA studies have claimed that well-trained bottom models can effectively represent labels. However, we demonstrate that this view is misleading and expose the vulnerability of existing LIAs. By leveraging mutual information, we present the first observation of the "model compensation" phenomenon in VFL. We theoretically prove that, in VFL, the mutual information between layer outputs and labels increases with layer depth, indicating that bottom models primarily extract feature information while the top model handles label mapping. Building on this insight, we introduce task reassignment to show that the success of existing LIAs actually stems from the distribution alignment between features and labels. When this alignment is disrupted, the performance of LIAs declines sharply or fails entirely. We further investigate the implications of this insight for defense and propose a zero-overhead technique based on layer adjustment. Extensive experiments across five datasets and five representative model architectures indicate that shifting cut layers forward, which increases the top model's share of the overall network, not only improves resistance to LIAs but also enhances other defenses.
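The cut-layer adjustment described above can be illustrated with a minimal sketch. This is not the paper's implementation; the layer names and the `split_at_cut` helper are hypothetical, and the sketch only shows how moving the cut layer forward reassigns layers from the passive party's bottom model to the active party's top model.

```python
def split_at_cut(layers, cut):
    """Partition a network (as an ordered list of layer names) at index `cut`:
    the passive party keeps layers[:cut] (bottom model), the active party
    keeps layers[cut:] (top model)."""
    return layers[:cut], layers[cut:]

# Illustrative 5-layer network (names are made up for this sketch).
layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]

# A typical split: the passive party holds most of the layers.
bottom, top = split_at_cut(layers, cut=3)
# bottom = ["conv1", "conv2", "conv3"], top = ["fc1", "fc2"]

# The defense: shift the cut layer forward (smaller index), so the active
# party's top model covers a larger proportion of the network, keeping
# more of the label-relevant mapping out of the attacker's reach.
bottom, top = split_at_cut(layers, cut=1)
# bottom = ["conv1"], top = ["conv2", "conv3", "fc1", "fc2"]
```

Because the split only changes where the network is partitioned, not its total depth or parameter count, the defense adds no training or inference overhead, consistent with the "zero-overhead" claim in the abstract.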
Source: arXiv: 2603.18680