arXiv submission date: 2026-02-18
📄 Abstract - MemoryArena: Benchmarking Agent Memory in Interdependent Multi-Session Agentic Tasks

Existing evaluations of agents with memory typically assess memorization and action in isolation. One class of benchmarks evaluates memorization by testing recall of past conversations or text but fails to capture how memory is used to guide future decisions. Another class focuses on agents acting in single-session tasks without the need for long-term memory. However, in realistic settings, memorization and action are tightly coupled: agents acquire memory while interacting with the environment, and subsequently rely on that memory to solve future tasks. To capture this setting, we introduce MemoryArena, a unified evaluation gym for benchmarking agent memory in multi-session Memory-Agent-Environment loops. The benchmark consists of human-crafted agentic tasks with explicitly interdependent subtasks, where agents must learn from earlier actions and feedback by distilling experiences into memory, and subsequently use that memory to guide later actions to solve the overall task. MemoryArena supports evaluation across web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning, and reveals that agents with near-saturated performance on existing long-context memory benchmarks like LoCoMo perform poorly in our agentic setting, exposing a gap in current evaluations for agents with memory.

Top-level tags: agents, benchmark, model evaluation
Detailed tags: agent memory, multi-session tasks, evaluation framework, memory-action coupling, interdependent subtasks

MemoryArena: Benchmarking Agent Memory in Interdependent Multi-Session Agentic Tasks


1️⃣ One-sentence summary

This paper introduces MemoryArena, a new benchmark platform for evaluating how agents actually perform on multi-session tasks in which long-term memory and action decisions are interdependent, revealing the shortcomings of existing memory evaluations.

From arXiv: 2602.16313