This human study did not involve human subjects: Validating LLM simulations as behavioral evidence
1️⃣ One-sentence summary
This paper examines how to use large language models to simulate human behavior for social science research. It contrasts two strategies, heuristic repair and statistical calibration, clarifying when each applies and what assumptions each requires, and argues that the key question is whether the model accurately represents the target population, not simply whether it can replace human participants.
A growing literature uses large language models (LLMs) as synthetic participants to generate cost-effective and nearly instantaneous responses in social science experiments. However, there is limited guidance on when such simulations support valid inference about human behavior. We contrast two strategies for obtaining valid estimates of causal effects and clarify the assumptions under which each is suitable for exploratory versus confirmatory research. Heuristic approaches seek to establish that simulated and observed human behavior are interchangeable through prompt engineering, model fine-tuning, and other repair strategies designed to reduce LLM-induced inaccuracies. While useful for many exploratory tasks, heuristic approaches lack the formal statistical guarantees typically required for confirmatory research. In contrast, statistical calibration combines auxiliary human data with statistical adjustments to account for discrepancies between observed and simulated responses. Under explicit assumptions, statistical calibration preserves validity and provides more precise estimates of causal effects at lower cost than experiments that rely solely on human participants. Yet the potential of both approaches depends on how well LLMs approximate the relevant populations. We consider what opportunities are overlooked when researchers focus myopically on substituting LLMs for human participants in a study.
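The abstract does not spell out the calibration estimator, so the sketch below is only an illustration of what "combining auxiliary human data with statistical adjustments" could look like, in the spirit of prediction-powered inference: a large, cheap simulated sample supplies the headline estimate, and a small paired human subsample estimates and removes the LLM-induced bias. The function name, argument names, and the specific correction are hypothetical, not the paper's implementation.

```python
import numpy as np

def calibrated_ate(sim_treat, sim_ctrl,
                   human_treat, human_ctrl,
                   sim_treat_paired, sim_ctrl_paired):
    """Hypothetical calibration of an average treatment effect (ATE).

    sim_treat / sim_ctrl: large arrays of LLM-simulated outcomes
        under treatment and control (cheap, possibly biased).
    human_treat / human_ctrl: small arrays of observed human outcomes.
    sim_treat_paired / sim_ctrl_paired: simulated outcomes for the
        *same* prompts or personas as the human subsample, used to
        estimate the simulator-vs-human gap in each arm.
    """
    # Headline ATE from the cheap simulated sample alone
    tau_sim = sim_treat.mean() - sim_ctrl.mean()

    # Simulator bias per arm, estimated on the paired human subsample
    gap_treat = human_treat.mean() - sim_treat_paired.mean()
    gap_ctrl = human_ctrl.mean() - sim_ctrl_paired.mean()

    # Debiased estimate: simulated ATE plus the estimated correction
    return tau_sim + (gap_treat - gap_ctrl)
```

Under this kind of design, precision comes mostly from the large simulated arms while validity rests on the small paired human sample, which matches the abstract's claim that calibration can preserve validity at lower cost than an all-human experiment, given explicit assumptions about the discrepancy model.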
Source: arXiv:2602.15785