📄 Abstract - Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
Expanding the linguistic diversity of instruct large language models (LLMs) is crucial for global accessibility but is often hindered by the reliance on costly, specialized labeled data in the target language and by catastrophic forgetting during adaptation. We tackle this challenge under a realistic, low-resource constraint: adapting instruct LLMs using only unlabeled target-language data. We introduce Source-Shielded Updates (SSU), a selective parameter-update strategy that proactively preserves source knowledge. Using a small set of source data and a parameter importance scoring method, SSU identifies parameters critical to maintaining source abilities, then applies a column-wise freezing strategy to protect them before adaptation. Experiments across five typologically diverse languages and 7B and 13B models demonstrate that SSU successfully mitigates catastrophic forgetting: it reduces average performance degradation on monolingual source tasks to just 3.4% (7B) and 2.8% (13B), in stark contrast to the 20.3% and 22.3% incurred by full fine-tuning. SSU also achieves target-language performance highly competitive with full fine-tuning, outperforming it on all benchmarks for the 7B models and on the majority for the 13B models.
Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
1️⃣ One-sentence summary
This paper proposes a new method called Source-Shielded Updates (SSU): by identifying and protecting the parameters of a large language model that are critical to its source-language abilities, it effectively prevents the model from forgetting its existing knowledge when adapted using only unlabeled target-language data, while preserving strong performance in the new language.
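The abstract only names the ingredients (source-data importance scoring, column-wise freezing before adaptation), so the following is a minimal PyTorch sketch of what such a step could look like, not the paper's implementation. The squared-gradient scoring, the `freeze_ratio` parameter, and the function names `compute_column_importance` / `install_column_freeze_masks` are illustrative assumptions.

```python
# Minimal sketch of column-wise shielding in the spirit of SSU (assumptions noted above).
import torch
import torch.nn as nn

def compute_column_importance(model: nn.Module, source_batch, loss_fn):
    """Score each column of every 2-D weight by squared gradients on a small source batch."""
    model.zero_grad()
    loss = loss_fn(model(source_batch["input"]), source_batch["target"])
    loss.backward()
    scores = {}
    for name, p in model.named_parameters():
        if p.grad is not None and p.dim() == 2:
            # Aggregate over rows -> one importance score per column.
            scores[name] = (p.grad ** 2).sum(dim=0)
    model.zero_grad()
    return scores

def install_column_freeze_masks(model: nn.Module, scores, freeze_ratio=0.5):
    """Shield the most source-critical columns by masking their gradients (hypothetical ratio)."""
    for name, p in model.named_parameters():
        if name in scores:
            k = int(freeze_ratio * scores[name].numel())
            frozen_cols = torch.topk(scores[name], k).indices
            mask = torch.ones_like(p)
            mask[:, frozen_cols] = 0.0                 # zero out updates to shielded columns
            p.register_hook(lambda g, m=mask: g * m)   # applied on every backward pass
```

During the subsequent fine-tuning on unlabeled target-language data, the hooks zero the gradients flowing into the shielded columns, so their source-tuned values stay fixed; in a real setup, optimizer-side effects such as weight decay on those entries would also need to be disabled.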