Diagnosing Retrieval vs. Utilization Bottlenecks in LLM Agent Memory
1️⃣ One-Sentence Summary
Using a diagnostic framework, this paper finds that for memory-augmented LLM agents the performance bottleneck lies mainly in how information is retrieved from memory, not in how it is stored; consequently, improving retrieval quality is more effective than optimizing the write strategy.
Memory-augmented LLM agents store and retrieve information from prior interactions, yet the relative importance of how memories are written versus how they are retrieved remains unclear. We introduce a diagnostic framework that analyzes how performance differences manifest across write strategies, retrieval methods, and memory utilization behavior, and apply it to a 3×3 study crossing three write strategies (raw chunks, Mem0-style fact extraction, MemGPT-style summarization) with three retrieval methods (cosine, BM25, hybrid reranking). On LoCoMo, retrieval method is the dominant factor: average accuracy spans 20 points across retrieval methods (57.1% to 77.2%) but only 3-8 points across write strategies. Raw chunked storage, which requires zero LLM calls, matches or outperforms expensive lossy alternatives, suggesting that current memory pipelines may discard useful context that downstream retrieval mechanisms fail to compensate for. Failure analysis shows that performance breakdowns most often manifest at the retrieval stage rather than at utilization. We argue that, under current retrieval practices, improving retrieval quality yields larger gains than increasing write-time sophistication. Code is publicly available at this https URL.
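To make the study's retrieval axis concrete, here is a minimal toy sketch of two of the three retrieval methods (cosine over term-frequency vectors and Okapi BM25) applied to raw chunked memories. This is an illustrative sketch only, not the paper's released code; the corpus, tokenizer, and `retrieve` helper are invented for the example, and a real system would use dense embeddings rather than bag-of-words cosine.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def cosine_score(query, doc):
    """Cosine similarity between term-frequency vectors of query and doc."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    num = sum(q[t] * d[t] for t in set(q) & set(d))
    denom = (math.sqrt(sum(v * v for v in q.values()))
             * math.sqrt(sum(v * v for v in d.values())))
    return num / denom if denom else 0.0

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of `doc` for `query`, with IDF computed over `corpus`."""
    n = len(corpus)
    avgdl = sum(len(tokenize(d)) for d in corpus) / n
    doc_tokens = tokenize(doc)
    tf = Counter(doc_tokens)
    score = 0.0
    for term in set(tokenize(query)):
        df = sum(1 for d in corpus if term in tokenize(d))
        if df == 0:
            continue
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
        num = tf[term] * (k1 + 1)
        den = tf[term] + k1 * (1 - b + b * len(doc_tokens) / avgdl)
        score += idf * num / den
    return score

def retrieve(query, corpus, method):
    """Return the index of the top-scoring memory chunk under the given method."""
    if method == "cosine":
        scores = [cosine_score(query, d) for d in corpus]
    else:  # "bm25"
        scores = [bm25_score(query, d, corpus) for d in corpus]
    return max(range(len(corpus)), key=scores.__getitem__)
```

The 3×3 design in the abstract would wrap a loop like this around each write strategy (raw chunks, fact extraction, summarization) and each retrieval method, then measure answer accuracy per cell.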
Source: arXiv:2603.02473