arXiv submission date: 2026-03-10
📄 Abstract - Influencing LLM Multi-Agent Dialogue via Policy-Parameterized Prompts

Large Language Models (LLMs) have emerged as a new paradigm for multi-agent systems. However, existing research on the behaviour of LLM-based multi-agent systems relies on ad hoc prompts and lacks a principled policy perspective. In contrast to reinforcement learning, we investigate whether prompts-as-actions can be parameterized to construct a lightweight, training-free policy, consisting of a sequence of state-action pairs, that influences conversational behaviour. Our framework regards prompts as actions executed by LLMs and dynamically constructs each prompt from five components based on the agent's current state. To test the effectiveness of parameterized control, we evaluate the dialogue flow with five indicators: responsiveness, rebuttal, evidence usage, non-repetition, and stance shift. Experiments with different LLM-driven agents in two public-discussion scenarios show that prompt parameterization can influence dialogue dynamics. These results indicate that policy-parameterized prompts offer a simple and effective mechanism for steering the dialogue process, supporting multi-agent systems research in the direction of social simulation.
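The framework above can be sketched as a state-to-prompt mapping: a policy step takes the agent's current dialogue state plus a small parameter vector and assembles a prompt from components. This is a minimal illustration only; the abstract does not name the five components or their parameters, so the `AgentState` fields, component names, and knobs below are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Hypothetical dialogue state observed by the policy (fields assumed)."""
    topic: str
    last_utterance: str
    stance: str                      # e.g. "support" / "oppose"
    evidence_pool: list = field(default_factory=list)

def prompt_policy(state: AgentState, params: dict) -> str:
    """Map the current state to a prompt-as-action by assembling
    parameterized components. Component names and parameters here are
    illustrative, not the paper's actual five components."""
    parts = [
        f"You are discussing: {state.topic}.",                # task framing
        f"Your current stance: {state.stance}.",              # stance component
        f'Respond to: "{state.last_utterance}"',              # responsiveness
    ]
    # Parameter-controlled components: varying these knobs across turns
    # is what lets the policy steer dialogue dynamics without training.
    if params.get("rebuttal_strength", 0.0) > 0.5:
        parts.append("Directly rebut the previous point.")    # rebuttal
    if params.get("use_evidence", False) and state.evidence_pool:
        parts.append(f"Cite this evidence: {state.evidence_pool[0]}")
    return " ".join(parts)

# A lightweight "policy" is then just a sequence of (state, params) pairs,
# each producing one prompt that an LLM agent executes as its action.
state = AgentState(
    topic="carbon tax",
    last_utterance="It would slow economic growth.",
    stance="support",
    evidence_pool=["a 2023 OECD report"],
)
prompt = prompt_policy(state, {"rebuttal_strength": 0.9, "use_evidence": True})
print(prompt)
```

Under this reading, "training-free" means the parameters in `params` are set or scheduled by hand (or by a simple rule) rather than learned, and the evaluation indicators measure how the resulting transcripts change as those parameters vary.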

Top-level tags: llm multi-agents agents
Detailed tags: policy parameterization, prompt engineering, dialogue control, social simulation, multi-agent systems

Influencing LLM Multi-Agent Dialogue via Policy-Parameterized Prompts


1️⃣ One-sentence summary

This paper proposes a training-free method that parameterizes prompts into a lightweight policy to systematically steer and control the conversational behaviour of multiple LLM agents, and validates its effectiveness in public-discussion scenarios.

Source: arXiv:2603.09890