arXiv submission date: 2026-01-02
📄 Abstract - Memory Bank Compression for Continual Adaptation of Large Language Models

Large Language Models (LLMs) have become a mainstay for many everyday applications. However, as data evolve, their knowledge quickly becomes outdated. Continual learning aims to update LLMs with new information without erasing previously acquired knowledge. Although methods such as full fine-tuning can incorporate new data, they are computationally expensive and prone to catastrophic forgetting, where prior knowledge is overwritten. Memory-augmented approaches address this by equipping LLMs with a memory bank, an external memory module that stores information for future use. However, these methods face a critical limitation: in real-world settings where large-scale data streams keep arriving, the memory bank grows without bound. In this paper, we propose MBC, a model that compresses the memory bank through a codebook optimization strategy during online adaptation learning. To ensure stable learning, we also introduce an online resetting mechanism that prevents codebook collapse. In addition, we employ Key-Value Low-Rank Adaptation in the attention layers of the LLM, enabling efficient utilization of the compressed memory representations. Experiments with benchmark question-answering datasets demonstrate that MBC reduces the memory bank size to 0.3% of that of the most competitive baseline, while maintaining high retention accuracy during online adaptation learning. Our code is publicly available at this https URL.
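The abstract names a codebook optimization strategy and an online resetting mechanism but does not spell out their details. The sketch below is only an illustration of the general idea, assuming a vector-quantization-style codebook: memory entries are stored as nearest-code indices rather than full embeddings, and codes whose usage collapses are re-seeded from recent inputs. The class name, hyperparameters, and update rules are assumptions for illustration, not MBC's actual design.

```python
import torch


class CompressedMemoryBank:
    """Illustrative vector-quantization-style memory compressor (not MBC's actual method).

    Incoming memory entries are snapped to their nearest codebook vector, so the
    bank stores small integer codes instead of full embeddings. An online reset
    re-initializes rarely used codes from recent inputs to avoid codebook collapse.
    """

    def __init__(self, num_codes: int = 256, dim: int = 768,
                 decay: float = 0.99, reset_threshold: float = 1e-3):
        self.codebook = torch.randn(num_codes, dim)   # K x D code vectors
        self.usage = torch.ones(num_codes)            # EMA of per-code usage
        self.decay = decay
        self.reset_threshold = reset_threshold
        self.codes: list[torch.Tensor] = []           # compressed memory (index tensors)

    def write(self, entries: torch.Tensor) -> torch.Tensor:
        """Compress a batch of memory entries (N x D) into codebook indices."""
        # Nearest-neighbour assignment under Euclidean distance.
        dists = torch.cdist(entries, self.codebook)   # N x K
        idx = dists.argmin(dim=1)                     # N
        self.codes.append(idx)

        # Exponential-moving-average usage statistics per code.
        counts = torch.bincount(idx, minlength=self.codebook.size(0)).float()
        self.usage = self.decay * self.usage + (1 - self.decay) * counts

        # Online reset: re-seed codes whose usage has collapsed toward zero.
        dead = self.usage < self.reset_threshold
        if dead.any():
            picks = torch.randint(0, entries.size(0), (int(dead.sum()),))
            self.codebook[dead] = entries[picks]
            self.usage[dead] = 1.0
        return idx

    def read(self, idx: torch.Tensor) -> torch.Tensor:
        """Decompress indices back into approximate memory vectors."""
        return self.codebook[idx]
```

The reset step matters because, without it, a few codes tend to absorb all assignments while the rest go unused, which is the collapse the paper's online resetting mechanism is designed to prevent. How the decompressed vectors are consumed by the LLM (here, via Key-Value Low-Rank Adaptation in the attention layers) is likewise not detailed in the abstract.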

Top-level tags: llm, model training, systems
Detailed tags: continual learning, memory compression, online adaptation, catastrophic forgetting, memory-augmented llms

Memory Bank Compression for Continual Adaptation of Large Language Models


1️⃣ One-sentence summary

This paper proposes MBC, a method that uses a compression and optimization strategy to drastically shrink the external memory bank, allowing a large language model to keep absorbing new knowledge efficiently and avoid forgetting old knowledge, without its storage footprint growing without bound as data keep streaming in.

Source: arXiv:2601.00756