arXiv submission date: 2026-01-14
📄 Abstract - The AI Hippocampus: How Far are We From Human Memory?

Memory plays a foundational role in augmenting the reasoning, adaptability, and contextual fidelity of modern Large Language Models and Multi-Modal LLMs. As these models transition from static predictors to interactive systems capable of continual learning and personalized inference, the incorporation of memory mechanisms has emerged as a central theme in their architectural and functional evolution. This survey presents a comprehensive and structured synthesis of memory in LLMs and MLLMs, organizing the literature into a cohesive taxonomy comprising implicit, explicit, and agentic memory paradigms. Specifically, the survey delineates three primary memory frameworks. Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers, encompassing their capacity for memorization, associative retrieval, and contextual reasoning. Recent work has explored methods to interpret, manipulate, and reconfigure this latent memory. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations, such as textual corpora, dense vectors, and graph-based structures, thereby enabling scalable and updatable interaction with information sources. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents, facilitating long-term planning, self-consistency, and collaborative behavior in multi-agent systems, with relevance to embodied and interactive AI. Extending beyond text, the survey examines the integration of memory within multi-modal settings, where coherence across vision, language, audio, and action modalities is essential. Key architectural advances, benchmark tasks, and open challenges are discussed, including issues related to memory capacity, alignment, factual consistency, and cross-system interoperability.

Top-level tags: llm agents multi-modal
Detailed tags: memory mechanisms, large language models, survey, continual learning, multi-agent systems

The AI Hippocampus: How Far are We From Human Memory?

1️⃣ One-sentence summary

This survey systematically reviews memory mechanisms in large language models and multi-modal large models, organizing them into three major categories, implicit, explicit, and agentic memory, and examines how these mechanisms enhance models' reasoning, adaptation, and interaction capabilities, as well as the main challenges they currently face.

Source: arXiv: 2601.09113