E-mem: Multi-agent based Episodic Context Reconstruction for LLM Agent Memory
1️⃣ One-sentence summary
This paper proposes a new framework called E-mem that mimics how biological memory works: multiple AI assistant agents each preserve complete, uncompressed segments of the dialogue history, while a central master agent coordinates them. This lets the system maintain logical coherence on complex problems while substantially reducing computational cost.
The evolution of Large Language Model (LLM) agents towards System 2 reasoning, characterized by deliberative, high-precision problem-solving, requires maintaining rigorous logical integrity over extended horizons. However, prevalent memory preprocessing paradigms suffer from destructive de-contextualization. By compressing complex sequential dependencies into pre-defined structures (e.g., embeddings or graphs), these methods sever the contextual integrity essential for deep reasoning. To address this, we propose E-mem, a framework shifting from Memory Preprocessing to Episodic Context Reconstruction. Inspired by biological engrams, E-mem employs a heterogeneous hierarchical architecture where multiple assistant agents maintain uncompressed memory contexts, while a central master agent orchestrates global planning. Unlike passive retrieval, our mechanism empowers assistants to locally reason within activated segments, extracting context-aware evidence before aggregation. Evaluations on the LoCoMo benchmark demonstrate that E-mem achieves over 54% F1, surpassing the state-of-the-art GAM by 7.75%, while reducing token cost by over 70%.
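The master/assistant architecture described in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the paper's implementation: the class names (`AssistantAgent`, `MasterAgent`), the segmentation scheme, and the keyword-overlap stand-in for LLM-based local reasoning are all hypothetical; in E-mem each assistant would invoke an LLM to reason over its activated segment before the master aggregates evidence.

```python
from dataclasses import dataclass


@dataclass
class AssistantAgent:
    """Holds one uncompressed episodic segment of the dialogue history."""
    segment: list  # raw utterances; never embedded, summarized, or graph-ified

    def local_reason(self, query: str) -> list:
        # Stand-in for LLM reasoning within an activated segment:
        # return utterances that share at least one word with the query.
        terms = set(query.lower().split())
        return [u for u in self.segment if terms & set(u.lower().split())]


class MasterAgent:
    """Orchestrates global planning and aggregates assistant evidence."""

    def __init__(self, history: list, segment_size: int = 2):
        # Partition the raw history into episodic segments, one per assistant.
        self.assistants = [
            AssistantAgent(history[i:i + segment_size])
            for i in range(0, len(history), segment_size)
        ]

    def answer(self, query: str) -> list:
        # Dispatch the query to every assistant, then aggregate the
        # context-aware evidence each one extracted locally.
        evidence = []
        for agent in self.assistants:
            evidence.extend(agent.local_reason(query))
        return evidence


history = [
    "Alice: I adopted a cat named Mochi last spring.",
    "Bob: Nice! My dog Rex just turned three.",
    "Alice: Mochi loves chasing laser pointers.",
    "Bob: Rex prefers fetch in the park.",
]
master = MasterAgent(history)
print(master.answer("What does Mochi love?"))
```

The key contrast with preprocessing-based memory is that segments stay verbatim: no information is lost to embedding or graph construction, and each assistant reasons over full local context before the master sees anything.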
Source: arXiv: 2601.21714