arXiv submission date: 2026-04-06
📄 Abstract - From Use to Oversight: How Mental Models Influence User Behavior and Output in AI Writing Assistants

AI-based writing assistants are ubiquitous, yet little is known about how users' mental models shape their use. We examine two types of mental models, functional (what the system does) and structural (how the system works), and how they affect control behavior (how users request, accept, or edit AI suggestions as they write) and writing outcomes. We primed participants ($N = 48$) with different system descriptions to induce these mental models before asking them to complete a cover letter writing task using a writing assistant that occasionally offered preconfigured ungrammatical suggestions to test whether the mental models affected participants' critical oversight. We find that while participants in the structural mental model condition demonstrate a better understanding of the system, this can backfire: these participants judged the system as more usable, yet they also produced letters with more grammatical errors, highlighting a complex relationship between system understanding, trust, and control in contexts that require user oversight of error-prone AI outputs.
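The abstract does not describe how the "preconfigured ungrammatical suggestions" were delivered, so the following is only a minimal sketch of one way such an injection mechanism could work: a suggestion stream that occasionally draws from a fixed pool of flawed completions. The suggestion pools, the `ERROR_RATE` value, and the function names are all hypothetical assumptions, not details from the paper.

```python
import random

# Hypothetical suggestion pools: grammatical completions vs. preconfigured
# ungrammatical ones used to probe whether participants catch errors.
FLUENT_SUGGESTIONS = [
    "I am excited to apply for this position.",
    "My experience aligns well with the role's requirements.",
]
UNGRAMMATICAL_SUGGESTIONS = [
    "I has three years of experience in this field.",
    "This role match my skills and career goals.",
]

# Assumed injection probability; the paper only says "occasionally".
ERROR_RATE = 0.25


def next_suggestion(rng: random.Random) -> str:
    """Return the next suggestion, occasionally drawing from the flawed pool."""
    pool = UNGRAMMATICAL_SUGGESTIONS if rng.random() < ERROR_RATE else FLUENT_SUGGESTIONS
    return rng.choice(pool)


if __name__ == "__main__":
    # A fixed seed would let every participant see the same error sequence,
    # which is one way "preconfigured" errors could be kept comparable.
    rng = random.Random(42)
    for turn in range(5):
        print(f"turn {turn}: {next_suggestion(rng)}")
```

Fixing the random seed (or hard-coding the error positions outright) would keep the error exposure identical across conditions, so any difference in how many errors survive into the final letters can be attributed to participants' oversight rather than to chance variation in what the assistant suggested.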

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: mental models, user behavior, ai writing assistants, human-ai interaction, oversight

From Use to Oversight: How Mental Models Influence User Behavior and Output in AI Writing Assistants


1️⃣ One-sentence summary

This study finds that the deeper a user's understanding of how an AI writing assistant works, the more usable the system feels to them, but this understanding can backfire: over-trusting the system, users scrutinize its erroneous suggestions less and ultimately produce text with more grammatical errors.

Source: arXiv:2604.05166