Locating acts of mechanistic reasoning in student team conversations with mechanistic machine learning
1️⃣ One-sentence summary
This paper proposes an interpretable machine learning model that can automatically identify, from transcripts of students' small-group discussions, the moments when they engage in mechanistic reasoning. By introducing a domain-specific inductive bias, the model's generalization is improved, helping STEM education researchers analyze classroom interaction data more efficiently.
STEM education researchers are often interested in identifying moments of students' mechanistic reasoning for deeper analysis, but have limited capacity to search through many team conversation transcripts to find segments with a high concentration of such reasoning. We offer a solution in the form of an interpretable machine learning model that outputs time-varying probabilities that individual students are engaging in acts of mechanistic reasoning, leveraging evidence from their own utterances as well as contributions from the rest of the group. Using the toolkit of intentionally-designed probabilistic models, we introduce a specific inductive bias that steers the probabilistic dynamics toward desired, domain-aligned behavior. Experiments compare trained models with and without the inductive bias components, investigating whether their presence improves the desired model behavior on transcripts involving never-before-seen students and a novel discussion context. Our results show that the inductive bias improves generalization -- supporting the claim that interpretability is built into the model for this task rather than imposed post hoc. We conclude with practical recommendations for STEM education researchers seeking to adopt the tool and for ML researchers aiming to extend the model's design. Overall, we hope this work encourages the development of mechanistically interpretable models that are understandable and controllable for both end users and model designers in STEM education research.
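The abstract describes a model that outputs time-varying probabilities that each student is engaging in mechanistic reasoning, driven by evidence from their own utterances and from the rest of the group. The paper does not specify the model's form here, so the following is only a hypothetical toy sketch of such probabilistic dynamics: a per-student logistic filtering step whose `persistence` and `group_weight` parameters are invented for illustration and are not the authors' design.

```python
import math

def update_probs(prev_probs, utterance_evidence,
                 persistence=0.8, group_weight=0.3):
    """One filtering step of a toy time-varying probability model.

    prev_probs          -- each student's probability of mechanistic
                           reasoning at the previous time step
    utterance_evidence  -- a log-odds evidence score per student from
                           the current utterances (positive = more
                           mechanistic; how such scores are obtained
                           is left unspecified here)

    Each student's new probability combines, on the logit scale:
    a persistence term carrying over their previous state, their own
    utterance evidence, and a group term so that contributions from
    teammates also shift the estimate. All weights are illustrative.
    """
    group_mean = sum(utterance_evidence) / len(utterance_evidence)
    new_probs = []
    for p, e in zip(prev_probs, utterance_evidence):
        logit = (persistence * math.log(p / (1 - p))  # own history
                 + e                                  # own utterance
                 + group_weight * group_mean)         # group context
        new_probs.append(1 / (1 + math.exp(-logit)))
    return new_probs

# Starting from uninformative priors, positive evidence for student 0
# and negative evidence for student 1 push their probabilities apart.
probs = update_probs([0.5, 0.5], [2.0, -2.0])
```

The point of the sketch is only the shape of the computation: probabilities evolve over time, and each update mixes individual and group-level signals, which is where an inductive bias like the one the paper describes could constrain the dynamics.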
Source: arXiv: 2604.21870