arXiv submission date: 2026-04-08
📄 Abstract - The Illusion of Stochasticity in LLMs

In this work, we demonstrate that reliable stochastic sampling is a fundamental yet unfulfilled requirement for Large Language Models (LLMs) operating as agents. Agentic systems are frequently required to sample from distributions, often inferred from observed data, a process which needs to be emulated by the LLM. This leads to a distinct failure point: while standard RL agents rely on external sampling mechanisms, LLMs fail to map their internal probability estimates to their stochastic outputs. Through rigorous empirical analysis across multiple model families, model sizes, prompting styles, and distributions, we demonstrate the extent of this failure. Crucially, we show that while powerful frontier models can convert provided random seeds to target distributions, their ability to sample directly from specific distributions is fundamentally flawed.
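The failure the abstract describes can be made concrete with a goodness-of-fit check: ask a model to emit samples from a stated categorical distribution, then compare the empirical counts against the target. The sketch below (a hypothetical example; the target distribution, the degenerate output stream, and the function names are illustrative, not taken from the paper) uses the Pearson chi-square statistic:

```python
from collections import Counter

def chi_square_stat(samples, target):
    """Pearson chi-square statistic of observed samples vs. a target
    categorical distribution given as {category: probability}."""
    n = len(samples)
    counts = Counter(samples)
    return sum(
        (counts.get(cat, 0) - n * p) ** 2 / (n * p)
        for cat, p in target.items()
    )

# Hypothetical request: "sample from {A: 0.5, B: 0.3, C: 0.2}"
target = {"A": 0.5, "B": 0.3, "C": 0.2}
# A degenerate (mode-collapsed) response stream that ignores the
# requested probabilities, as the paper reports LLMs tend to produce:
llm_outputs = ["A"] * 90 + ["B"] * 5 + ["C"] * 5
stat = chi_square_stat(llm_outputs, target)
# A statistic far above the df=2 critical value (~5.99 at alpha=0.05)
# flags that the outputs do not match the requested distribution.
```

A calibrated sampler would keep the statistic near the critical value across repeated trials; the point of the test is that it separates "knows the probabilities" from "can act on them".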

Top-level tags: llm agents model evaluation
Detailed tags: stochastic sampling agentic systems probability estimation distribution sampling empirical analysis

The Illusion of Stochasticity in LLMs


1️⃣ One-sentence summary

Through empirical study, this paper finds that current large language models, when operating as agents, have a fundamental defect in their internal stochastic-sampling ability: they cannot reliably turn their own probability estimates into random outputs that follow a specified distribution, which constitutes a key bottleneck for their use as autonomous decision-making systems.

Source: arXiv:2604.06543