arXiv submission date: 2026-01-09
📄 Abstract - Distilling Feedback into Memory-as-a-Tool

We propose a framework that amortizes the cost of inference-time reasoning by converting transient critiques into retrievable guidelines, through a file-based memory system and agent-controlled tool calls. We evaluate this method on the Rubric Feedback Bench, a novel dataset for rubric-based learning. Experiments demonstrate that our augmented LLMs rapidly match the performance of test-time refinement pipelines while drastically reducing inference cost.
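The abstract describes an agent that writes distilled critiques into a file-based memory and retrieves them via tool calls on later tasks. The paper does not specify an implementation; the following is a minimal sketch under assumed details (the `GuidelineMemory` class, its JSON file layout, and tag-based retrieval are all hypothetical illustrations of the idea, not the authors' code):

```python
import json
import tempfile
from pathlib import Path

class GuidelineMemory:
    """Hypothetical file-based memory: critiques are distilled into short
    guideline strings and persisted so later tasks can retrieve them."""

    def __init__(self, path):
        self.path = Path(path)
        # Load previously distilled guidelines if the memory file exists.
        self.guidelines = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def write(self, task_tag, guideline):
        # Tool call the agent issues after receiving a critique:
        # store the distilled rule instead of discarding it.
        self.guidelines.append({"tag": task_tag, "rule": guideline})
        self.path.write_text(json.dumps(self.guidelines, indent=2))

    def read(self, task_tag):
        # Tool call issued before answering a new task:
        # retrieve rules distilled from critiques of similar past tasks.
        return [g["rule"] for g in self.guidelines if g["tag"] == task_tag]

# Demo with a fresh temporary file so the run is self-contained.
memory = GuidelineMemory(Path(tempfile.mkdtemp()) / "guidelines.json")
memory.write("summarization", "Lead with the main claim; avoid hedging twice.")
print(memory.read("summarization"))
```

The point of the design is amortization: a critique produced once at inference time becomes a cheap file read on every subsequent task, rather than being regenerated by a refinement loop.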

Top-level tags: llm agents model training
Detailed tags: inference-time reasoning, feedback distillation, memory systems, tool usage, cost reduction

Distilling Feedback into Memory-as-a-Tool


1️⃣ One-sentence summary

This paper proposes a new method that converts one-off feedback critiques into storable, retrievable guideline rules, letting large language models quickly match the performance of repeated fine-grained refinement on subsequent tasks while substantially reducing compute cost.

Source: arXiv: 2601.05960