
arXiv submission date: 2026-01-23
📄 Abstract - Lost in Simulation: LLM-Simulated Users are Unreliable Proxies for Human Users in Agentic Evaluations

Agentic benchmarks increasingly rely on LLM-simulated users to scalably evaluate agent performance, yet the robustness, validity, and fairness of this approach remain unexamined. Through a user study with participants across the United States, India, Kenya, and Nigeria, we investigate whether LLM-simulated users serve as reliable proxies for real human users in evaluating agents on τ-Bench retail tasks. We find that user simulation lacks robustness, with agent success rates varying up to 9 percentage points across different user LLMs. Furthermore, evaluations using simulated users exhibit systematic miscalibration, underestimating agent performance on challenging tasks and overestimating it on moderately difficult ones. African American Vernacular English (AAVE) speakers experience consistently worse success rates and calibration errors than Standard American English (SAE) speakers, with disparities compounding significantly with age. We also find simulated users to be a differentially effective proxy for different populations, performing worst for AAVE and Indian English speakers. Additionally, simulated users introduce conversational artifacts and surface different failure patterns than human users. These findings demonstrate that current evaluation practices risk misrepresenting agent capabilities across diverse user populations and may obscure real-world deployment challenges.
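
To make the "systematic miscalibration" finding concrete: for each task, the agent's success rate with a simulated user can be compared against its success rate with human users, and the signed gap aggregated across tasks. The sketch below is a hypothetical illustration of that comparison, not code from the paper; the trial records, function names, and data layout are all assumptions for the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (task_id, user_type, success),
# where user_type is "simulated" or "human".
trials = [
    ("task_01", "simulated", True),
    ("task_01", "human", False),
    ("task_02", "simulated", False),
    ("task_02", "human", True),
    # ... more trials per task and user type
]

def success_rates(trials, user_type):
    """Per-task success rate for one user type."""
    by_task = defaultdict(list)
    for task_id, utype, success in trials:
        if utype == user_type:
            by_task[task_id].append(1.0 if success else 0.0)
    return {task: mean(outcomes) for task, outcomes in by_task.items()}

sim_rates = success_rates(trials, "simulated")
human_rates = success_rates(trials, "human")

# Signed calibration gap per task: positive means the simulated-user
# evaluation overestimates agent performance relative to human users.
gaps = {task: sim_rates[task] - human_rates[task]
        for task in sim_rates.keys() & human_rates.keys()}

print("mean calibration gap:", mean(gaps.values()))
```

Under this framing, the paper's finding corresponds to the gap being negative on challenging tasks (simulated users understate performance) and positive on moderately difficult ones (they overstate it).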

Top-level tags: llm agents benchmark
Detailed tags: agent evaluation, user simulation, evaluation bias, human-ai interaction, robustness

Lost in Simulation: LLM-Simulated Users are Unreliable Proxies for Human Users in Agentic Evaluations


1️⃣ One-Sentence Summary

Through a cross-country user study, this paper finds that using large language models to simulate users when evaluating AI agents is unreliable: it misestimates agents' true capabilities and shows systematic bias against users from different linguistic and cultural backgrounds, which can distort evaluation results.

Source: arXiv 2601.17087