arXiv submission date: 2026-03-24
📄 Abstract - MemCollab: Cross-Agent Memory Collaboration via Contrastive Trajectory Distillation

Large language model (LLM)-based agents rely on memory mechanisms to reuse knowledge from past problem-solving experiences. Existing approaches typically construct memory in a per-agent manner, tightly coupling stored knowledge to a single model's reasoning style. In modern deployments with heterogeneous agents, a natural question arises: can a single memory system be shared across different models? We find that naively transferring memory between agents often degrades performance, because such memory entangles task-relevant knowledge with agent-specific biases. To address this challenge, we propose MemCollab, a collaborative memory framework that constructs agent-agnostic memory by contrasting reasoning trajectories generated by different agents on the same task. This contrastive process distills abstract reasoning constraints that capture shared task-level invariants while suppressing agent-specific artifacts. We further introduce a task-aware retrieval mechanism that conditions memory access on task category, ensuring that only relevant constraints are used at inference time. Experiments on mathematical reasoning and code generation benchmarks demonstrate that MemCollab consistently improves both accuracy and inference-time efficiency across diverse agents, including cross-model-family settings. Our results show that collaboratively constructed memory can function as a shared reasoning resource for diverse LLM-based agents.

Top-level tags: llm agents systems
Detailed tags: memory collaboration contrastive learning knowledge distillation reasoning trajectories multi-agent systems

MemCollab: Cross-Agent Memory Collaboration via Contrastive Trajectory Distillation


1️⃣ One-sentence summary

This paper proposes MemCollab, a method that lets different LLM-based agents share a single memory store: by contrasting the agents' problem-solving trajectories, it distills general, agent-agnostic reasoning rules, improving both the accuracy and the inference-time efficiency of diverse agents.
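The high-level flow described in the abstract can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual implementation: trajectories are simplified to lists of reasoning-step strings, the "contrastive distillation" is approximated by keeping only steps shared across agents, and `SharedMemory` is a hypothetical name.

```python
# Minimal sketch of MemCollab's pipeline (all names and data
# structures are illustrative assumptions, not the paper's API):
# 1) contrast trajectories from two agents on the same task,
# 2) keep only the shared steps as agent-agnostic "constraints",
# 3) store them under a task category for task-aware retrieval.

from collections import defaultdict


class SharedMemory:
    """Agent-agnostic memory keyed by task category."""

    def __init__(self):
        self._store = defaultdict(list)

    def distill(self, category, traj_a, traj_b):
        # Contrastive step (greatly simplified): steps appearing in
        # both agents' trajectories are treated as task-level
        # invariants; agent-specific steps are discarded as artifacts.
        shared = [step for step in traj_a if step in traj_b]
        for step in shared:
            if step not in self._store[category]:
                self._store[category].append(step)

    def retrieve(self, category):
        # Task-aware retrieval: return only the constraints stored
        # for this task category, never unrelated ones.
        return list(self._store[category])


mem = SharedMemory()
mem.distill(
    "math",
    ["define variables", "set up equation", "verbose self-check"],
    ["set up equation", "define variables", "guess-and-check"],
)
print(mem.retrieve("math"))  # only the steps both agents shared
print(mem.retrieve("code"))  # empty: no constraints for this category
```

In this toy version, "define variables" and "set up equation" survive as shared constraints while each agent's idiosyncratic step is dropped; the real system presumably operates on far richer trajectory representations.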

Source: arXiv 2603.23234