On the Evidentiary Limits of Membership Inference for Copyright Auditing
1️⃣ One-Sentence Summary
Through an adversarial experiment, this paper shows that current state-of-the-art membership inference attacks are brittle against text rewrites that preserve semantics while altering lexical structure, and are therefore insufficient on their own to serve as reliable evidence for copyright auditing of large language models.
As large language models (LLMs) are trained on increasingly opaque corpora, membership inference attacks (MIAs) have been proposed to audit whether copyrighted texts were used during training, despite growing concerns about their reliability under realistic conditions. We ask whether MIAs can serve as admissible evidence in adversarial copyright disputes where an accused model developer may obfuscate training data while preserving semantic content, and formalize this setting through a judge-prosecutor-accused communication protocol. To test robustness under this protocol, we introduce SAGE (Structure-Aware SAE-Guided Extraction), a paraphrasing framework guided by Sparse Autoencoders (SAEs) that rewrites training data to alter lexical structure while preserving semantic content and downstream utility. Our experiments show that state-of-the-art MIAs degrade when models are fine-tuned on SAGE-generated paraphrases, indicating that their signals are not robust to semantics-preserving transformations. While some leakage remains in certain fine-tuning regimes, these results suggest that MIAs are brittle in adversarial settings and insufficient as a standalone mechanism for copyright auditing of LLMs.
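The paper evaluates existing MIAs rather than proposing a new one. As a concrete illustration of the kind of per-text signal such attacks rely on, the sketch below implements a Min-K%-Prob-style score, a published MIA baseline: average the log-probabilities of a text's least likely tokens and compare against a threshold. The threshold, the `k` fraction, and the example log-probability values here are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a Min-K%-Prob-style membership inference score.
# Token log-probabilities would come from the audited LLM; here they are
# supplied directly. Threshold and k are illustrative, not from the paper.

def min_k_percent_score(token_logprobs, k=0.2):
    """Average log-probability of the k fraction of least likely tokens.

    Higher scores (closer to 0) suggest the text was seen in training,
    since even the model's least confident tokens remain fairly likely.
    """
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]  # the n least likely tokens
    return sum(lowest) / n

def infer_membership(token_logprobs, threshold=-4.0, k=0.2):
    """Declare 'member' if the min-k% score exceeds a calibrated threshold."""
    return min_k_percent_score(token_logprobs, k) > threshold

# Hypothetical example: a semantics-preserving paraphrase replaces memorized
# phrasing with unseen wording, depressing per-token log-probs enough to
# flip the decision -- the brittleness the paper's SAGE experiments probe.
original = [-0.3, -0.8, -1.1, -0.5, -2.0, -0.9]    # hypothetical member text
paraphrase = [-1.9, -3.2, -4.8, -2.6, -5.5, -3.1]  # hypothetical rewrite
print(infer_membership(original))    # True  -> flagged as a training member
print(infer_membership(paraphrase))  # False -> signal destroyed by rewriting
```

Because the score is computed over surface tokens, a rewrite that keeps meaning but changes wording can push it below any calibrated threshold; this is the evidentiary gap the adversarial protocol is built to expose.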
Source: arXiv:2601.12937