Per-parameter Task Arithmetic for Unlearning in Large Language Models
1️⃣ One-sentence Summary
This paper proposes a new method, per-parameter task arithmetic, which precisely removes private information from large language models by reweighting each model parameter individually. It forgets the targeted information effectively while better preserving the model's other knowledge, and it is more efficient and practical than conventional approaches.
In large language model (LLM) unlearning, the goal is to remove private information from a trained model. Task arithmetic unlearns by subtracting a task vector (TV), defined as the parameter difference between a model fine-tuned on the private information and the original model. While efficient, this can cause over-forgetting by disrupting parameters that are essential for retaining other knowledge. Motivated by the observation that each parameter has a different importance for forgetting versus retention, we propose a per-parameter task arithmetic (PerTA) mechanism that rescales the TV with per-parameter weights. These weights quantify the relative importance of each parameter for forgetting versus retention and are estimated either from gradients (PerTA-grad) or from a diagonal Fisher information approximation (PerTA-fisher). We further discuss why PerTA is effective, extend it to a more general form, and provide additional analysis. Extensive experiments show that PerTA consistently improves over standard TV and, in many cases, surpasses widely used training-based unlearning methods in both forgetting effectiveness and overall model utility. By retaining the efficiency of task arithmetic while mitigating over-forgetting, PerTA offers a principled and practical framework for LLM unlearning.
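To make the idea concrete, below is a minimal PyTorch-style sketch of per-parameter rescaling of a task vector. It is not the authors' implementation: the squared-gradient (diagonal-Fisher-like) importance estimate, the forget/retain importance ratio, the normalization step, and the names `diag_importance` and `perta_unlearn` are all illustrative assumptions for how such a mechanism could be wired up.

```python
# Sketch of PerTA-style unlearning: subtract a per-parameter rescaled task vector.
# Assumptions (not from the paper): importance weights come from accumulated
# squared gradients on a forget set and a retain set, and are combined as a
# normalized ratio. Function and variable names are hypothetical.
import torch


def diag_importance(model, data_loader, loss_fn):
    """Accumulate squared gradients per parameter (diagonal Fisher approximation)."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in data_loader:
        model.zero_grad()
        loss = loss_fn(model, batch)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach() ** 2
    return importance


def perta_unlearn(original_state, tuned_state, forget_imp, retain_imp,
                  alpha=1.0, eps=1e-8):
    """Subtract a per-parameter rescaled task vector from the original weights.

    The task vector is (tuned - original). Each entry is scaled by the ratio of
    forget importance to retain importance, so parameters that matter mainly for
    the private information are moved far, while parameters important for
    retained knowledge are barely touched.
    """
    unlearned = {}
    for name, w_orig in original_state.items():
        tv = tuned_state[name] - w_orig                      # task vector entry
        f = forget_imp.get(name, torch.zeros_like(w_orig))
        r = retain_imp.get(name, torch.zeros_like(w_orig))
        weight = f / (r + eps)                               # relative importance
        weight = weight / (weight.mean() + eps)              # normalize scale (assumed)
        unlearned[name] = w_orig - alpha * weight * tv
    return unlearned
```

Standard task arithmetic corresponds to setting the weight to 1 everywhere; the per-parameter weight is what lets the update stay small on parameters the retain data depends on.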
Source: arXiv 2601.22030