arXiv submission date: 2026-05-06
📄 Abstract - From Parameter Dynamics to Risk Scoring: Quantifying Sample-Level Safety Degradation in LLM Fine-tuning

Safety alignment of Large Language Models (LLMs) is extremely fragile, as fine-tuning on a small number of benign samples can erase safety behaviors learned from millions of preference examples. Existing studies attempt to explain this phenomenon by comparing parameters and hidden states before and after fine-tuning, but overlook their dynamic evolution during fine-tuning. In this paper, we uncover a critical mechanism underlying safety degradation by analyzing parameter dynamics: benign fine-tuning causes parameters to cumulatively drift toward danger-aligned directions, progressively undermining the model's safety. This finding suggests that samples contributing more to this drift carry greater fine-tuning risks. Based on this insight, we propose a method for Sample-Level Quantification of Safety Degradation (SQSD), which quantifies the influence of each training sample on safety degradation. Specifically, SQSD assigns continuous risk scores to samples by measuring the difference between the projections of their induced parameter updates onto danger and safety directions. Extensive experiments across multiple models and datasets demonstrate that SQSD effectively quantifies sample-level fine-tuning risks and exhibits strong transferability across model architectures, parameter scales, and parameter-efficient methods.
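To make the scoring rule concrete, below is a minimal PyTorch sketch of one plausible reading of it, not the authors' implementation: a sample's induced parameter update is approximated by its negative gradient, and the risk score is the projection difference between a danger-aligned and a safety-aligned direction. All names here (`sqsd_risk_score`, `loss_fn`, `danger_dir`, `safety_dir`) are hypothetical, and the direction vectors are assumed to be estimated beforehand, e.g. from reference updates on known harmful vs. safe data.

```python
import torch

def sqsd_risk_score(model, loss_fn, sample, danger_dir, safety_dir):
    """Sketch of an SQSD-style per-sample risk score (assumptions above).

    danger_dir / safety_dir: flattened, unit-norm tensors matching the
    concatenated parameter vector of `model`, precomputed elsewhere.
    """
    model.zero_grad()
    loss_fn(model, sample).backward()

    # First-order proxy: one gradient step moves parameters along the
    # negative gradient, so treat -grad as the sample's induced update.
    update = -torch.cat([p.grad.flatten()
                         for p in model.parameters() if p.grad is not None])

    # Risk = projection onto the danger direction minus projection onto
    # the safety direction; larger scores mean riskier samples.
    return (torch.dot(update, danger_dir) - torch.dot(update, safety_dir)).item()
```

Under this reading, ranking a fine-tuning set by `sqsd_risk_score` would surface the benign-looking samples that drive the most drift toward danger-aligned directions.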

Top-level tags: llm machine learning model training
Detailed tags: safety alignment fine-tuning parameter dynamics risk quantification transferability

From Parameter Dynamics to Risk Scoring: Quantifying Sample-Level Safety Degradation in LLM Fine-tuning


1️⃣ One-Sentence Summary

By analyzing the dynamic evolution of parameters during LLM fine-tuning, this paper finds that even a small number of benign samples can drift model parameters toward danger-aligned directions and thereby undermine safety alignment; based on this finding, it proposes a method called SQSD that computes a continuous risk score for each training sample, quantifying its contribution to the model's safety degradation.

Source: arXiv 2605.04572