BID-LoRA:一种用于持续学习与机器遗忘的参数高效统一框架 / BID-LoRA: A Parameter-Efficient Framework for Continual Learning and Unlearning
1️⃣ One-Sentence Summary
This paper proposes a unified framework called BID-LoRA that lets an AI model, much like a human, both continually learn new knowledge and precisely forget outdated or sensitive information, while updating only a tiny fraction of its parameters, thereby efficiently addressing the knowledge leakage and performance degradation seen in prior methods.
Recent advances in deep learning underscore the need for systems that can not only acquire new knowledge through Continual Learning (CL) but also remove outdated, sensitive, or private information through Machine Unlearning (MU). However, while CL methods are well developed, MU techniques remain in their early stages, creating a critical gap for unified frameworks that depend on both capabilities. We find that naively combining existing CL and MU approaches results in knowledge leakage: a gradual degradation of foundational knowledge across repeated adaptation cycles. To address this, we formalize Continual Learning Unlearning (CLU) as a unified paradigm with three key goals: (i) precise deletion of unwanted knowledge, (ii) efficient integration of new knowledge while preserving prior information, and (iii) minimal knowledge leakage across cycles. We propose Bi-Directional Low-Rank Adaptation (BID-LoRA), a novel framework featuring three dedicated adapter pathways (retain, new, and unlearn) applied to attention layers, combined with escape unlearning, which pushes forget-class embeddings to positions maximally distant from retained knowledge, while updating only 5% of the parameters. Experiments on CIFAR-100 show that BID-LoRA outperforms CLU baselines across multiple adaptation cycles. We further evaluate on CASIA-Face100, a curated face recognition subset, demonstrating practical applicability to real-world identity management systems in which new users must be enrolled and withdrawn users removed.
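The three-pathway adapter idea described in the abstract can be sketched in miniature. The pathway names (retain, new, unlearn) come from the abstract, but everything else here, the additive composition rule, the shapes, the zero-initialization of one LoRA factor, and the toy dimensions, is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden size and LoRA rank (illustrative values, not from the paper)

# Frozen attention projection weight of the base model.
W = rng.standard_normal((d, d))

# Three low-rank pathways named after the abstract's retain/new/unlearn adapters.
# Standard LoRA init: one factor zero, so each pathway starts as a no-op.
adapters = {
    name: (np.zeros((d, r)), rng.standard_normal((r, d)) * 0.01)
    for name in ("retain", "new", "unlearn")
}

def adapted_forward(x, active=("retain", "new")):
    """Frozen base projection plus the selected low-rank pathways (assumed additive)."""
    out = x @ W.T
    for name in active:
        B, A = adapters[name]
        out += x @ (B @ A).T  # rank-r update on top of the frozen weight
    return out

x = rng.standard_normal((2, d))
y = adapted_forward(x)

# Only the pathway factors train: 3 pathways x 2*d*r parameters vs d*d frozen.
trainable = 3 * 2 * d * r
```

Because the `B` factors are zero-initialized, the adapted forward pass initially matches the frozen model exactly; training then moves knowledge into (or, for the unlearn pathway, away from) the low-rank updates without touching `W`. The ~5% trainable-parameter figure in the abstract refers to the full model, not this toy layer.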
Source: arXiv:2604.12686