LogAct: Enabling Agentic Reliability via Shared Logs
1️⃣ One-sentence summary
This paper proposes a new framework called LogAct, which manages and coordinates LLM-driven agents through a shared log: agent actions can be reviewed and blocked before execution, and the system recovers automatically and consistently from failures, substantially improving the reliability and controllability of agentic systems.
Agents are LLM-driven components that can mutate environments in powerful, arbitrary ways. Extracting guarantees for the execution of agents in production environments can be challenging due to asynchrony and failures. In this paper, we propose a new abstraction called LogAct, where each agent is a deconstructed state machine playing a shared log. In LogAct, agentic actions are visible in the shared log before they are executed; can be stopped prior to execution by pluggable, decoupled voters; and recovered consistently in the case of agent or environment failure. LogAct enables agentic introspection, allowing the agent to analyze its own execution history using LLM inference, which in turn enables semantic variants of recovery, health check, and optimization. In our evaluation, LogAct agents recover efficiently and correctly from failures; debug their own performance; optimize token usage in swarms; and stop all unwanted actions for a target model on a representative benchmark with just a 3% drop in benign utility.
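The core pattern the abstract describes — actions become visible in a shared log before execution, and pluggable, decoupled voters can veto them — can be illustrated with a minimal sketch. All names here (`SharedLog`, `propose`, `no_deletes`) are hypothetical and not from the paper; this is an assumption-laden illustration of the pattern, not LogAct's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    agent_id: str
    description: str

@dataclass
class SharedLog:
    entries: List[Action] = field(default_factory=list)
    voters: List[Callable[[Action], bool]] = field(default_factory=list)

    def propose(self, action: Action) -> bool:
        # The action is appended to the log (made visible) *before* execution.
        self.entries.append(action)
        # Pluggable, decoupled voters may veto the action prior to execution.
        return all(voter(action) for voter in self.voters)

# A hypothetical voter that blocks destructive actions.
def no_deletes(action: Action) -> bool:
    return "delete" not in action.description

log = SharedLog(voters=[no_deletes])
print(log.propose(Action("a1", "read file")))    # approved
print(log.propose(Action("a1", "delete file")))  # vetoed by voter
# Both actions remain in the log, enabling introspection and replay.
print(len(log.entries))
```

Because vetoed actions still appear in the log, a recovery or introspection pass can replay the full history — the property the paper leverages for consistent recovery and semantic health checks.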
From arXiv: 2604.07988