Evaluating Alignment of Behavioral Dispositions in LLMs
1️⃣ One-Sentence Summary
This paper adapts psychological questionnaires into Situational Judgment Tests to systematically evaluate the behavioral dispositions of 25 large language models in social scenarios, and finds significant deviations from human preference distributions: models are overconfident when human consensus is low, deviate from the consensus when it is high, and show gaps between their stated values and their actual behavior.
As LLMs integrate into our daily lives, understanding their behavior becomes essential. In this work, we focus on behavioral dispositions, the underlying tendencies that shape responses in social contexts, and introduce a framework to study how closely the dispositions expressed by LLMs align with those of humans. Our approach is grounded in established psychological questionnaires but adapts them for LLMs by transforming human self-report statements into Situational Judgment Tests (SJTs). These SJTs assess behavior by eliciting natural recommendations in realistic user-assistant scenarios. We generate 2,500 SJTs, each validated by three human annotators, and collect preferred actions from 10 annotators per SJT, drawn from a large pool of 550 participants. In a comprehensive study involving 25 LLMs, we find that models often do not reflect the distribution of human preferences: (1) in scenarios with low human consensus, LLMs consistently exhibit overconfidence in a single response; (2) when human consensus is high, smaller models deviate significantly, and even some frontier models fail to reflect the consensus in 15-20% of cases; (3) traits can exhibit cross-LLM patterns, e.g., LLMs may encourage emotion expression in contexts where human consensus favors composure. Lastly, mapping psychometric statements directly to behavioral scenarios presents a unique opportunity to evaluate the predictive validity of self-reports, revealing considerable gaps between LLMs' stated values and their revealed behavior.
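To make the consensus and overconfidence comparison concrete, here is a minimal sketch (not the paper's released code) of how one might score a single SJT: compare the empirical distribution of 10 annotators' preferred actions against the distribution of an LLM's sampled answers. The variable names (`human_votes`, `model_samples`) and the use of total variation distance are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: contrasting an LLM's answer distribution on one SJT
# with the preference distribution of 10 human annotators.
# `human_votes` / `model_samples` are hypothetical placeholder data.
from collections import Counter

def distribution(choices, options):
    """Empirical probability of each answer option."""
    counts = Counter(choices)
    total = len(choices)
    return {opt: counts.get(opt, 0) / total for opt in options}

def total_variation(p, q):
    """Total variation distance between two distributions over the same options."""
    return 0.5 * sum(abs(p[o] - q[o]) for o in p)

options = ["A", "B", "C", "D"]
human_votes = ["A", "A", "B", "A", "C", "A", "B", "A", "A", "B"]  # 10 annotators
model_samples = ["A"] * 20  # e.g., 20 sampled model answers, all identical

p_human = distribution(human_votes, options)
p_model = distribution(model_samples, options)

consensus = max(p_human.values())            # share of annotators behind the modal action
overconfidence = max(p_model.values()) - consensus
mismatch = total_variation(p_human, p_model)

print(f"human consensus: {consensus:.2f}")           # 0.60 -> only moderate consensus
print(f"model overconfidence: {overconfidence:+.2f}")  # +0.40 -> single-answer overconfidence
print(f"distribution mismatch (TV): {mismatch:.2f}")
```

In this toy instance, humans split 6/3/1 across three actions while the model always returns "A", illustrating finding (1): overconfidence in a single response where human consensus is low.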
Source: arXiv: 2602.11328