Abstract - Multi-Layered Memory Architectures for LLM Agents: An Experimental Evaluation of Long-Term Context Retention
Long-horizon dialogue systems suffer from semantic drift and unstable memory retention across extended sessions. This paper presents a Multi-Layer Memory Framework that decomposes dialogue history into working, episodic, and semantic layers with adaptive retrieval gating and retention regularization. The architecture controls cross-session drift while maintaining bounded context growth and computational efficiency. Experiments on LOCOMO, LOCCO, and LoCoMo show improved performance, achieving a 46.85 success rate, 0.618 overall F1 with 0.594 multi-hop F1, and 56.90% six-period retention, while reducing the false-memory rate to 5.1% and context usage to 58.40%. The results confirm enhanced long-term retention and reasoning stability under constrained context budgets.
Multi-Layered Memory Architectures for LLM Agents: An Experimental Evaluation of Long-Term Context Retention
1️⃣ One-Sentence Summary
This paper proposes a multi-layer memory framework that decomposes dialogue history into working, episodic, and semantic layers and applies adaptive retrieval and retention mechanisms. It effectively mitigates information drift and forgetting in long dialogues, markedly improving the agent's long-term memory retention and reasoning stability under limited computational resources.
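The three-layer decomposition described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the layer names (working, episodic, semantic) come from the abstract, while the keyword-overlap retrieval gate and the overflow/consolidation rules are placeholder assumptions standing in for the paper's adaptive gating and retention regularization.

```python
from collections import deque

class MultiLayerMemory:
    """Sketch of a three-layer dialogue memory.

    working  -- bounded buffer of recent turns (bounded context growth)
    episodic -- per-session event records spilled from working memory
    semantic -- distilled long-term facts promoted by consolidation
    The retrieval gate here is a toy keyword-overlap heuristic.
    """

    def __init__(self, working_size: int = 4):
        self.working = deque(maxlen=working_size)  # recent turns, fixed budget
        self.episodic = []                         # session event log
        self.semantic = {}                         # key -> distilled fact

    def observe(self, turn: str) -> None:
        # New turns enter working memory; the oldest turn is archived
        # into the episodic layer before the deque evicts it.
        if len(self.working) == self.working.maxlen:
            self.episodic.append(self.working[0])
        self.working.append(turn)

    def consolidate(self, key: str, fact: str) -> None:
        # Retention step: promote a stable fact into the semantic layer.
        self.semantic[key] = fact

    def retrieve(self, query: str) -> list:
        # Placeholder gate: read all layers, keep entries that overlap
        # the query (a real gate would score and select layers adaptively).
        q = query.lower()
        hits = [t for t in list(self.working) + self.episodic if q in t.lower()]
        hits += [f for k, f in self.semantic.items() if q in k.lower()]
        return hits
```

Usage: after observing turns beyond the working-memory budget, older turns remain reachable via the episodic layer, and consolidated facts via the semantic layer, so retrieval spans all three layers under a bounded working context.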