SimpleMem: Efficient Lifelong Memory for LLM Agents
1️⃣ One-sentence summary
This paper proposes SimpleMem, an efficient memory framework that uses semantic lossless compression to distill an agent's past interactions into compact, structured memory units, substantially improving the agent's accuracy and efficiency on long-horizon tasks while sharply reducing computational cost.
To support reliable long-term interaction in complex environments, LLM agents require memory systems that efficiently manage historical experiences. Existing approaches either retain full interaction histories via passive context extension, leading to substantial redundancy, or rely on iterative reasoning to filter noise, incurring high token costs. To address this challenge, we introduce SimpleMem, an efficient memory framework based on semantic lossless compression. We propose a three-stage pipeline designed to maximize information density and token utilization: (1) Semantic Structured Compression, which applies entropy-aware filtering to distill unstructured interactions into compact, multi-view indexed memory units; (2) Recursive Memory Consolidation, an asynchronous process that integrates related units into higher-level abstract representations to reduce redundancy; and (3) Adaptive Query-Aware Retrieval, which dynamically adjusts retrieval scope based on query complexity to construct precise context efficiently. Experiments on benchmark datasets show that our method consistently outperforms baseline approaches in accuracy, retrieval efficiency, and inference cost, achieving an average F1 improvement of 26.4% while reducing inference-time token consumption by up to 30-fold, demonstrating a superior balance between performance and efficiency. Code is available at this https URL.
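The three-stage pipeline can be pictured as a toy sketch. This is not the paper's implementation: the length-based filter (standing in for entropy-aware filtering), the keyword-overlap merge, and the query-complexity heuristic `base_k + len(q) // 3` are all simplified, hypothetical stand-ins for the actual components.

```python
from dataclasses import dataclass

@dataclass
class MemoryUnit:
    summary: str        # compressed text of the interaction
    keywords: set       # stand-in for the paper's multi-view index

def compress(turns, min_len=20):
    """Stage 1 (sketch): keep only sufficiently informative turns.
    Turn length is a crude proxy for entropy-aware filtering."""
    return [MemoryUnit(summary=t, keywords=set(t.lower().split()))
            for t in turns if len(t) >= min_len]

def consolidate(units, overlap=0.5):
    """Stage 2 (sketch): merge units whose keyword sets overlap heavily,
    approximating recursive consolidation into higher-level units."""
    merged = []
    for u in units:
        for m in merged:
            shared = len(u.keywords & m.keywords)
            if shared / max(1, min(len(u.keywords), len(m.keywords))) >= overlap:
                m.summary += " | " + u.summary
                m.keywords |= u.keywords
                break
        else:
            merged.append(u)
    return merged

def retrieve(units, query, base_k=1):
    """Stage 3 (sketch): widen retrieval scope with query complexity,
    measured here simply by the number of query keywords."""
    q = set(query.lower().split())
    k = base_k + len(q) // 3  # hypothetical adaptive-scope heuristic
    return sorted(units, key=lambda u: len(u.keywords & q), reverse=True)[:k]
```

Running `retrieve(consolidate(compress(turns)), query)` returns a small, deduplicated context rather than the full history, which is the efficiency trade the abstract describes.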
Source: arXiv:2601.02553