Generative AI User Experience: Developing Human--AI Epistemic Partnership
1️⃣ One-Sentence Summary
This paper proposes a new theory called the "Human--AI Epistemic Partnership," arguing that users' interactions with generative AI (such as ChatGPT) resemble a dynamic partnership of jointly constructing knowledge rather than simple tool use. This helps explain complex user experiences, such as why people simultaneously trust and doubt AI, and why they collaborate with it while feeling that responsibility attribution is blurred.
Generative AI (GenAI) has rapidly entered education, yet its user experience is often explained through adoption-oriented constructs such as usefulness, ease of use, and engagement. We argue that these constructs are no longer sufficient because systems such as ChatGPT do not merely support learning tasks but also participate in knowledge construction. Existing theories cannot explain why GenAI frequently produces experiences characterized by negotiated authority, redistributed cognition, and accountability tension. To address this gap, this paper develops the Human--AI Epistemic Partnership Theory (HAEPT), explaining the GenAI user experience as a form of epistemic partnership that features a dynamic negotiation of three interlocking contracts: epistemic, agency, and accountability. We argue that findings on trust, over-reliance, academic integrity, teacher caution, and relational interaction with GenAI can be reinterpreted as tensions within these contracts rather than as isolated issues. Instead of holding a single, stable view of GenAI, users adjust how they relate to it over time through calibration cycles. These repeated interactions account for why trust and skepticism often coexist, and partnership modes describe recurrent configurations of human--AI collaboration across tasks. To demonstrate the usefulness of HAEPT, we apply it to analyze the UX of collaborative learning with AI speakers and of AI-facilitated scientific argumentation, illustrating different contract configurations.
Source: arXiv: 2603.23863