Human-in-the-Loop Control of Objective Drift in LLM-Assisted Computer Science Education
1️⃣ One-Sentence Summary
This paper proposes a human-in-the-loop teaching approach that controls the "drift" of AI output away from stated objectives by training students to specify task goals and acceptance criteria before using AI programming tools, and designs a laboratory curriculum with deliberately injected drift to cultivate students' diagnosis and error-recovery skills.
Large language models (LLMs) are increasingly embedded in computer science education through AI-assisted programming tools, yet such workflows often exhibit objective drift, in which locally plausible outputs diverge from stated task specifications. Existing instructional responses frequently emphasize tool-specific prompting practices, limiting durability as AI platforms evolve. This paper adopts a human-centered stance, treating human-in-the-loop (HITL) control as a stable educational problem rather than a transitional step toward AI autonomy. Drawing on systems engineering and control-theoretic concepts, we frame objectives and world models as operational artifacts that students configure to stabilize AI-assisted work. We propose a pilot undergraduate CS laboratory curriculum that explicitly separates planning from execution and trains students to specify acceptance criteria and architectural constraints prior to code generation. In selected labs, the curriculum also introduces deliberate, concept-aligned drift to support diagnosis and recovery from specification violations. We report a sensitivity power analysis for a three-arm pilot design comparing unstructured AI use, structured planning, and structured planning with injected drift, establishing detectable effect sizes under realistic section-level constraints. The contribution is a theory-driven, methodologically explicit foundation for HITL pedagogy that renders control competencies teachable across evolving AI tools.
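The abstract reports a sensitivity power analysis for the three-arm pilot design (unstructured AI use, structured planning, structured planning with injected drift). A minimal sketch of such an analysis for a one-way ANOVA, using statsmodels, is shown below; the per-arm sample size, alpha, and target power are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: sensitivity power analysis for a three-arm pilot design,
# modeled as a one-way ANOVA. All numeric inputs (20 students per arm,
# alpha = 0.05, power = 0.80) are hypothetical, not taken from the paper.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Sensitivity analysis: hold sample size, alpha, and power fixed,
# and solve for the minimum detectable effect size (Cohen's f).
min_detectable_f = analysis.solve_power(
    effect_size=None,  # unknown: solve for this
    nobs=60,           # total N across the three arms (assumed 20 per arm)
    alpha=0.05,
    power=0.80,
    k_groups=3,        # unstructured / structured / structured + injected drift
)
print(f"Minimum detectable Cohen's f: {min_detectable_f:.3f}")
```

In a sensitivity analysis the section-level enrollment cap fixes `nobs`, so the question becomes which effect sizes the pilot can realistically detect rather than how many participants are needed.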
Source: arXiv: 2604.00281