Uncovering Memorization in Time Series Imputation Models: LBRM Membership Inference and Its Link to Attribute Leakage
1️⃣ One-Sentence Summary
This paper finds that time series deep learning models used to impute missing data pose privacy leakage risks, and proposes a new attack method that can not only determine whether specific data was used to train a model, but also infer sensitive information about the training data.
Deep learning models for time series imputation are now essential in fields such as healthcare, the Internet of Things (IoT), and finance. However, their deployment raises critical privacy concerns. Beyond the well-known issue of unintended memorization, which has been extensively studied in generative models, we demonstrate that time series models are vulnerable to inference attacks in a black-box setting. In this work, we introduce a two-stage attack framework comprising: (1) a novel membership inference attack based on a reference model that improves detection accuracy, even against models robust to overfitting-based attacks, and (2) the first attribute inference attack that predicts sensitive characteristics of the training data of a time series imputation model. We evaluate these attacks on attention-based and autoencoder architectures in two scenarios: models trained from scratch, and fine-tuned models where the adversary has access to the initial weights. Our experimental results demonstrate that the proposed membership attack retrieves a significant portion of the training data, with a TPR@top25% score significantly higher than a naive attack baseline. We also show that our membership attack provides good insight into whether attribute inference will succeed (with a precision of 90%, versus 78% in the general case).
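To make the reference-model idea concrete, here is a minimal sketch of how such a membership score and the TPR@top25% metric could be computed. This is an illustration under assumptions, not the paper's implementation: we assume the attack scores each sample by the gap between its imputation loss under a reference model and under the target model (members should have lower target loss), and the synthetic losses, function names, and parameters below are all invented for the example.

```python
import numpy as np

def membership_scores(loss_target, loss_ref):
    # Assumed scoring rule: a sample is more likely a training member
    # if the target model fits it better than the reference model does.
    return loss_ref - loss_target

def tpr_at_top25(scores, is_member):
    # TPR@top25%: rank samples by score, take the top 25%,
    # and measure what fraction of all true members land there.
    k = max(1, len(scores) // 4)
    top_idx = np.argsort(scores)[::-1][:k]
    return is_member[top_idx].sum() / is_member.sum()

# Synthetic illustration: 1000 candidates, half of them members;
# members get a slightly lower loss under the (hypothetical) target model.
rng = np.random.default_rng(0)
n = 1000
is_member = np.zeros(n, dtype=bool)
is_member[:500] = True
loss_ref = rng.normal(1.0, 0.2, n)
loss_target = loss_ref - np.where(is_member, 0.15, 0.0) + rng.normal(0.0, 0.05, n)

scores = membership_scores(loss_target, loss_ref)
tpr = tpr_at_top25(scores, is_member)
```

With half the candidates being members, a naive random ranking would score about 0.25 on this metric (and 0.5 is the ceiling when only 25% of samples are selected), so any clear separation of the loss gap pushes the score toward that ceiling.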
Source: arXiv:2603.24213