arXiv submission date: 2026-02-18
📄 Abstract - Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

LLMs are increasingly used for complex problems that are not resolved in a single response but instead require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs when deciding whether to keep exploring or commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about that snippet's correctness; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about these cost-uncertainty tradeoffs and thereby explore their environment more optimally. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior, which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), in which we feed the LLM this additional context to enable it to act more optimally. The improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
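The tradeoff the abstract describes can be made concrete with a small expected-cost calculation. The sketch below is illustrative only, not the paper's implementation: `should_test`, `p_correct`, `cost_test`, and `cost_mistake` are hypothetical names standing in for the agent's calibrated correctness prior and the two costs it trades off.

```python
# Minimal sketch of the cost-benefit stopping rule suggested by the abstract.
# Assumption: the agent has a calibrated probability that its current answer
# (e.g., a code snippet) is correct, plus known costs for one more probe
# (running a test) and for committing a wrong answer.

def should_test(p_correct: float, cost_test: float, cost_mistake: float) -> bool:
    """Return True if testing is cheaper in expectation than committing now.

    Committing immediately incurs an expected cost of
    (1 - p_correct) * cost_mistake; running a test incurs cost_test
    (and, ideally, reduces uncertainty before the next decision).
    """
    expected_cost_of_committing = (1.0 - p_correct) * cost_mistake
    return cost_test < expected_cost_of_committing


if __name__ == "__main__":
    # An agent 70% sure its code is correct, with a cheap test and an
    # expensive mistake, should test before committing.
    print(should_test(p_correct=0.7, cost_test=1.0, cost_mistake=10.0))   # True
    # Once the agent is 95% sure, committing becomes the cheaper action.
    print(should_test(p_correct=0.95, cost_test=1.0, cost_mistake=10.0))  # False
```

Under this rule, testing stops being worthwhile once `p_correct` exceeds `1 - cost_test / cost_mistake` (0.9 in the example above); making that threshold explicit to the model is the kind of cost-benefit reasoning CTA aims to induce.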

Top-level tags: llm agents, model evaluation
Detailed tags: sequential decision-making, cost-benefit tradeoff, exploration strategies, uncertainty calibration, reinforcement learning

Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents


1️⃣ One-Sentence Summary

This paper proposes a new method called Calibrate-Then-Act that helps large language models make better decisions on tasks such as information retrieval and programming by having them explicitly weigh the cost of exploring the environment against the uncertainty of the outcome, for example when deciding whether to keep testing code or to commit a final answer.

Source: arXiv: 2602.16699