arXiv submission date: 2026-03-19
📄 Abstract - Act While Thinking: Accelerating LLM Agents via Pattern-Aware Speculative Tool Execution

LLM-powered agents are emerging as a dominant paradigm for autonomous task solving. Unlike standard inference workloads, agents operate in a strictly serial "LLM-tool" loop, where the LLM must wait for external tool execution at every step. This execution model introduces severe latency bottlenecks. To address this problem, we propose PASTE, a Pattern-Aware Speculative Tool Execution method designed to hide tool latency through speculation. PASTE is based on the insight that although agent requests are semantically diverse, they exhibit stable application-level control flows (recurring tool-call sequences) and predictable data dependencies (parameter passing between tools). By exploiting these properties, PASTE improves agent serving performance through speculative tool execution. Experimental results against state-of-the-art baselines show that PASTE reduces average task completion time by 48.5% and improves tool execution throughput by 1.8×.
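The core mechanism described above can be illustrated with a minimal sketch: a pattern table learned from recurring tool-call sequences predicts the next tool, which is launched in the background before the LLM finishes its next reasoning step. All tool names, the bigram pattern table, and the helper functions below are hypothetical illustrations, not the paper's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools (illustrative only, not from the paper).
def search(query):
    return f"results for {query}"

def summarize(text):
    return f"summary of {text[:20]}"

TOOLS = {"search": search, "summarize": summarize}

# Pattern table: observed tool-call bigram -> likely next tool.
# PASTE would learn such recurring sequences from application-level control flow.
PATTERNS = {"search": "summarize"}

executor = ThreadPoolExecutor(max_workers=1)

def run_with_speculation(call_sequence):
    """Execute (tool, arg) calls, speculatively launching the predicted next tool.

    The predicted tool is fed the current result, modeling the predictable
    data dependency (parameter passing) between consecutive tools.
    """
    results, speculative = [], None
    for name, arg in call_sequence:
        if speculative and speculative[0] == (name, arg):
            # Speculation hit: the tool already ran while the LLM was thinking.
            results.append(speculative[1].result())
        else:
            if speculative:
                speculative[1].cancel()  # misprediction: discard the future
            results.append(TOOLS[name](arg))
        # Predict the next call and start it early in a background thread.
        nxt = PATTERNS.get(name)
        speculative = ((nxt, results[-1]),
                       executor.submit(TOOLS[nxt], results[-1])) if nxt else None
    return results
```

In this sketch a speculation "hit" requires both the predicted tool name and its predicted argument to match the actual call; on a miss the speculative result is simply discarded, so correctness is preserved either way.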

Top-level tags: llm agents systems
Detailed tags: speculative execution tool latency agent acceleration control flow performance optimization

Act While Thinking: Accelerating LLM Agents via Pattern-Aware Speculative Tool Execution


1️⃣ One-Sentence Summary

This paper proposes a method called PASTE that predicts and pre-executes the external tools an LLM agent is likely to call next, sharply reducing the agent's idle time spent waiting for tool results and cutting average task completion time by nearly half.

Source: arXiv: 2603.18897