Not All Trust is the Same: Effects of Decision Workflow and Explanations in Human-AI Decision Making
1️⃣ One-Sentence Summary
This study finds that in AI-assisted decision making, changing the decision workflow (e.g., having users form their own judgment before seeing the AI's recommendation) does not effectively reduce overreliance on the AI; users' self-reported trust and their actual reliance behavior are two distinct things; and the effect of explanations varies with users' domain expertise and the decision workflow.
A central challenge in AI-assisted decision making is achieving warranted, well-calibrated trust. Both overtrust (accepting incorrect AI recommendations) and undertrust (rejecting correct advice) should be prevented. Prior studies differ in the design of the decision workflow (whether users see the AI suggestion immediately, a 1-step setup, or must submit an initial decision first, a 2-step setup) and in how trust is measured (through self-reports, or as behavioral trust, that is, reliance). We examined the effects and interactions of (a) the type of decision workflow, (b) the presence of explanations, and (c) users' domain knowledge and prior AI experience. We compared reported trust, reliance (agreement rate and switch rate), and overreliance. Results showed no evidence that a 2-step setup reduces overreliance. The decision workflow also did not directly affect self-reported trust, but there was a crossover interaction with domain knowledge and explanations, suggesting that the effects of explanations alone may not generalize across workflow setups. Finally, our findings confirm that reported trust and reliance behavior are distinct constructs that should be evaluated separately in AI-assisted decision making.
From arXiv: 2603.05229