Unforgeable Watermarks for Language Models via Robust Signatures
1️⃣ One-Sentence Summary
This paper proposes a new language-model watermarking technique based on robust digital signatures. Beyond detecting AI-generated text as traditional watermarks do, it prevents adversaries from forging watermarks to make false attributions and can trace flagged content back to its original source, providing stronger ownership protection and traceability for AI-generated content.
Language models now routinely produce text that is difficult to distinguish from human writing, raising the need for robust tools to verify content provenance. Watermarking has emerged as a promising countermeasure, with existing work largely focused on model quality preservation and robust detection. However, current schemes provide limited protection against false attribution. We strengthen the notion of soundness by introducing two novel guarantees: unforgeability and recoverability. Unforgeability prevents adversaries from crafting false positives: texts that are far from any output of the watermarked model but are nonetheless flagged as watermarked. Recoverability provides an additional layer of protection: whenever a watermark is detected, the detector identifies the source text from which the flagged content was derived. Together, these properties strengthen content ownership by linking content exclusively to its generating model, enabling secure attribution and fine-grained traceability. We construct the first undetectable watermarking scheme that is robust, unforgeable, and recoverable with respect to substitutions (i.e., perturbations in the Hamming metric). The key technical ingredient is a new cryptographic primitive called robust (or recoverable) digital signatures, which allows verification of messages that are close to signed ones while preventing forgery of messages that are far from all previously signed messages. We show that any standard digital signature scheme can be boosted to a robust one using property-preserving hash functions (Boyle, LaVigne, and Vaikuntanathan, ITCS 2019).
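To make the abstract's two guarantees concrete, here is a deliberately naive Python sketch of a robust signature with recoverability: the signer attaches the original message to a standard signature, and the verifier accepts any candidate within Hamming distance `t` of a validly signed message, returning the recovered source on success. This is only an illustration of the *interface* (robust verify + recoverability); it is not the paper's construction, which instead signs a compact property-preserving hash rather than shipping the full message. The `SECRET`, the use of HMAC as a stand-in for a public-key signature, and the byte-level Hamming distance are all assumptions made for the demo.

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # stand-in for a real signing key (assumption)

def sign(message: bytes) -> bytes:
    # Standard (non-robust) signature. An HMAC stands in for a
    # public-key signature scheme purely for illustration.
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def hamming(a: bytes, b: bytes) -> int:
    # Hamming distance over byte positions; messages of unequal
    # length are treated as maximally far apart in this toy.
    if len(a) != len(b):
        return max(len(a), len(b))
    return sum(x != y for x, y in zip(a, b))

def robust_sign(message: bytes) -> tuple[bytes, bytes]:
    # Naive robust signature: keep the original message alongside the
    # tag so the verifier can check closeness and recover the source.
    # (The paper's construction avoids this by signing a
    # property-preserving hash instead.)
    return (message, sign(message))

def robust_verify(candidate: bytes, sig_pair: tuple[bytes, bytes], t: int):
    # Accept iff the candidate is within Hamming distance t of a
    # validly signed message; on success, return the recovered source
    # (recoverability). Messages far from every signed message are
    # rejected (unforgeability, trivially in this toy).
    original, tag = sig_pair
    if not hmac.compare_digest(tag, sign(original)):
        return None
    if hamming(candidate, original) <= t:
        return original
    return None
```

For example, `robust_verify(b"the model wrote thiz", robust_sign(b"the model wrote this"), t=2)` accepts the perturbed text and returns the original, while a completely unrelated string is rejected.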
Source: arXiv: 2602.15323