📄 Abstract - Rethinking Anonymity Claims in Synthetic Data Generation: A Model-Centric Privacy Attack Perspective
Training generative machine learning models to produce synthetic tabular data has become a popular approach for enhancing privacy in data sharing. As this typically involves processing sensitive personal information, releasing either the trained model or generated synthetic datasets can still pose privacy risks. Yet, recent research, commercial deployments, and privacy regulations like the General Data Protection Regulation (GDPR) largely assess anonymity at the level of an individual dataset. In this paper, we rethink anonymity claims about synthetic data from a model-centric perspective and argue that meaningful assessments must account for the capabilities and properties of the underlying generative model and be grounded in state-of-the-art privacy attacks. This perspective better reflects real-world products and deployments, where trained models are often readily accessible for interaction or querying. We interpret the GDPR's definitions of personal data and anonymization under such access assumptions to identify the types of identifiability risks that must be mitigated and map them to privacy attacks across different threat settings. We then argue that synthetic data techniques alone do not ensure sufficient anonymization. Finally, we compare the two mechanisms most commonly used alongside synthetic data -- Differential Privacy (DP) and Similarity-based Privacy Metrics (SBPMs) -- and argue that while DP can offer robust protections against identifiability risks, SBPMs lack adequate safeguards. Overall, our work connects regulatory notions of identifiability with model-centric privacy attacks, enabling more responsible and trustworthy regulatory assessment of synthetic data systems by researchers, practitioners, and policymakers.
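As a concrete illustration of the model-centric attack surface the abstract describes, below is a minimal sketch (our illustration, not code from the paper) of a black-box membership inference attack against a queryable tabular generator. The `generator_sample` interface, the nearest-neighbor distance score, and the reference-record calibration are all assumptions made for this sketch; published attacks in this family use more careful density estimates.

```python
import numpy as np

def nn_distance(record, synthetic):
    """Euclidean distance from `record` to its nearest synthetic neighbor."""
    return np.min(np.linalg.norm(synthetic - record, axis=1))

def membership_score(generator_sample, target, reference, n_synth=10_000):
    """Black-box membership-inference score against a generative model.

    generator_sample: callable returning an (n, d) array of synthetic rows
                      (hypothetical interface standing in for any queryable model).
    target:           (d,) candidate training record.
    reference:        (m, d) records from the same population known to be
                      outside the training set, used for calibration.

    Returns a score in [0, 1]; values near 1 suggest the target was memorized.
    """
    synth = generator_sample(n_synth)
    d_target = nn_distance(target, synth)
    d_ref = np.array([nn_distance(r, synth) for r in reference])
    # Fraction of reference records that sit farther from the synthetic data
    # than the target: a high fraction means the target is unusually close
    # to what the model emits, i.e., a likely training member.
    return float(np.mean(d_ref > d_target))
```

Even in this toy version, the paper's point is visible: the score depends on what the model can be made to emit under querying, not on any single released dataset.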
Rethinking Anonymity Claims in Synthetic Data Generation: A Model-Centric Privacy Attack Perspective
1️⃣ One-Sentence Summary
This paper argues that assessing the anonymity of synthetic data cannot rest on the generated dataset alone: the assessment must account for the capabilities of the underlying generative model and the privacy attacks it enables. It further argues that synthetic data techniques by themselves are insufficient to guarantee anonymity, and that Differential Privacy offers more reliable protection than Similarity-based Privacy Metrics.
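To make the DP-versus-SBPM contrast in this summary concrete, here is a hedged sketch (again ours, under assumed interfaces, not the paper's code): a similarity-based metric such as distance-to-closest-record (DCR) inspects one released dataset after the fact, while a DP mechanism perturbs the computation itself, so its guarantee covers every output the mechanism could produce.

```python
import numpy as np

def dcr_passes(synthetic, training, threshold=0.1):
    """Illustrative similarity-based privacy metric (SBPM): the release
    'passes' if no synthetic row lies within `threshold` of a training row.
    A single passing dataset says nothing about other samples the same
    trained model can still emit."""
    dists = np.min(
        np.linalg.norm(synthetic[:, None, :] - training[None, :, :], axis=2),
        axis=1,
    )
    return bool(np.all(dists > threshold))

def dp_count(data, predicate, epsilon=1.0):
    """Laplace mechanism for a counting query (sensitivity 1): the
    epsilon-DP guarantee is a property of the mechanism and holds for
    every possible output, not just the one we happened to observe."""
    true_count = sum(predicate(x) for x in data)
    return true_count + np.random.laplace(scale=1.0 / epsilon)
```

A release can pass the DCR check and still leak: the check says nothing about the next batch of samples the same model generates, which is exactly the gap the paper attributes to SBPMs.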