Revisiting Weight Regularization for Low-Rank Continual Learning
1️⃣ One-Sentence Summary
This paper proposes a new method, EWC-LoRA, which applies classic weight regularization to low-rank adapters, effectively mitigating task interference when continually adapting large-scale pre-trained models while keeping storage and computational overhead constant.
Continual Learning (CL) with large-scale pre-trained models (PTMs) has recently gained wide attention, shifting the focus from training from scratch to continually adapting PTMs. This has given rise to a promising paradigm: parameter-efficient continual learning (PECL), where task interference is typically mitigated by assigning a task-specific module during training, such as low-rank adapters. However, weight regularization techniques, such as Elastic Weight Consolidation (EWC), a key strategy in CL, remain underexplored in this new paradigm. In this paper, we revisit weight regularization in low-rank CL as a new perspective for mitigating task interference in PECL. Unlike existing low-rank CL methods, we mitigate task interference by regularizing a shared low-rank update through EWC, thereby keeping the storage requirement and inference costs constant regardless of the number of tasks. Our proposed method, EWC-LoRA, leverages a low-rank representation to estimate parameter importance over the full-dimensional space. This design offers a practical, computation- and memory-efficient solution for CL with PTMs, and provides insights that may inform the broader application of regularization techniques within PECL. Extensive experiments on various benchmarks demonstrate the effectiveness of EWC-LoRA, achieving a stability-plasticity trade-off superior to existing low-rank CL approaches. These results indicate that, even under low-rank parameterizations, weight regularization remains an effective mechanism for mitigating task interference. Code is available at: this https URL.
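To make the core idea concrete, below is a minimal PyTorch sketch of EWC-style regularization over a shared low-rank update, assembled only from what the abstract states (a shared LoRA update regularized by EWC, with importance estimated over the full-dimensional space). All names here (`SharedLoRALinear`, `ewc_penalty`, `fisher_diag`, `anchor_delta_w`, `lam`, `rank`) are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn


class SharedLoRALinear(nn.Module):
    """A frozen base linear layer plus a single shared low-rank update B @ A
    that is reused across all tasks instead of allocating per-task adapters.
    (Illustrative sketch, not the paper's code.)"""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay fixed
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))

    def delta_w(self) -> torch.Tensor:
        # Reconstruct the effective full-dimensional update from the low-rank factors.
        return self.B @ self.A

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.delta_w().T


def ewc_penalty(layer: SharedLoRALinear,
                fisher_diag: torch.Tensor,
                anchor_delta_w: torch.Tensor,
                lam: float = 100.0) -> torch.Tensor:
    """Diagonal-EWC quadratic penalty applied to the full-dimensional update
    Delta W = B @ A, rather than to the low-rank factors themselves."""
    diff = layer.delta_w() - anchor_delta_w
    return 0.5 * lam * (fisher_diag * diff.pow(2)).sum()
```

In this sketch, after finishing a task one would estimate `fisher_diag` (for example from squared gradients of the log-likelihood with respect to Delta W, back-propagated through the low-rank factors) and snapshot `anchor_delta_w = layer.delta_w().detach()`; on the next task, `ewc_penalty` is added to the task loss so that the single shared adapter is pulled back toward parameters that mattered for earlier tasks, keeping memory and inference cost independent of the number of tasks.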
Source: arXiv: 2602.17559