TraceMem: Weaving Narrative Memory Schemata from User Conversational Traces
1️⃣ One-sentence summary
This paper proposes a new framework called TraceMem that mimics the memory mechanisms of the human brain: it automatically organizes and structures a user's long-term conversations with an AI into coherent "storylines", significantly improving large language models' ability to understand and remember complex, long-horizon dialogue.
Sustaining long-term interactions remains a bottleneck for Large Language Models (LLMs), as their limited context windows struggle to manage dialogue histories that extend over time. Existing memory systems often treat interactions as disjointed snippets, failing to capture the underlying narrative coherence of the dialogue stream. We propose TraceMem, a cognitively inspired framework that weaves structured, narrative memory schemata from user conversational traces through a three-stage pipeline: (1) Short-term Memory Processing, which employs a deductive topic segmentation approach to demarcate episode boundaries and extract semantic representations; (2) Synaptic Memory Consolidation, a process that summarizes episodes into episodic memories before distilling them, alongside semantics, into user-specific traces; and (3) Systems Memory Consolidation, which uses two-stage hierarchical clustering to organize these traces into coherent, time-evolving narrative threads under unifying themes. These threads are encapsulated into structured user memory cards, forming narrative memory schemata. For memory utilization, we provide an agentic search mechanism to enhance the reasoning process. Evaluation on the LoCoMo benchmark shows that TraceMem achieves state-of-the-art performance with a brain-inspired architecture. Analysis shows that by constructing coherent narratives, it surpasses baselines in multi-hop and temporal reasoning, underscoring its essential role in deep narrative comprehension. Additionally, we provide an open discussion on memory systems, offering our perspectives and future outlook on the field. Our code implementation is available at: this https URL
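To make the third stage more concrete, here is a minimal sketch of the idea behind two-stage hierarchical organization of traces: first group traces into themes by embedding similarity, then order each theme's traces chronologically into a time-evolving narrative thread. The function names, the greedy centroid-based grouping, and the cosine threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of two-stage trace organization (not the paper's code).
# Stage 1: group traces into themes by embedding similarity.
# Stage 2: sort each theme's traces by timestamp into a narrative thread.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def cluster_traces(traces, threshold=0.8):
    """Stage 1 (assumed): greedily assign each trace to the most similar
    theme centroid, or start a new theme if none exceeds the threshold."""
    themes = []  # each theme: {"centroid": vec, "members": [trace, ...]}
    for trace in traces:
        best, best_sim = None, threshold
        for theme in themes:
            sim = cosine(trace["embedding"], theme["centroid"])
            if sim >= best_sim:
                best, best_sim = theme, sim
        if best is None:
            themes.append({"centroid": list(trace["embedding"]),
                           "members": [trace]})
        else:
            best["members"].append(trace)
            n = len(best["members"])  # running-mean centroid update
            best["centroid"] = [(c * (n - 1) + x) / n
                                for c, x in zip(best["centroid"],
                                                trace["embedding"])]
    return themes

def build_threads(themes):
    """Stage 2 (assumed): chronologically order each theme's traces
    to form a time-evolving narrative thread."""
    return [sorted(t["members"], key=lambda tr: tr["timestamp"])
            for t in themes]

# Toy usage: two themes emerge (fitness vs. career), each a sorted thread.
traces = [
    {"text": "started marathon training", "embedding": [1.0, 0.0], "timestamp": 1},
    {"text": "new job at a startup",      "embedding": [0.0, 1.0], "timestamp": 2},
    {"text": "ran first 10k race",        "embedding": [0.9, 0.1], "timestamp": 3},
]
threads = build_threads(cluster_traces(traces))
```

In this toy run, the two fitness traces share a theme and are ordered by timestamp, while the career trace forms its own single-trace thread; the real system would operate on LLM-distilled traces and learned embeddings.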
Source: arXiv: 2602.09712