arXiv submission date: 2026-03-26
📄 Abstract - Measuring What Matters -- or What's Convenient?: Robustness of LLM-Based Scoring Systems to Construct-Irrelevant Factors

Automated systems have been widely adopted across the educational testing industry for open-response assessment and essay scoring. These systems commonly achieve performance comparable or superior to trained human raters, but have frequently been shown to be vulnerable to construct-irrelevant factors (i.e., features of responses unrelated to the construct being assessed) and adversarial conditions. Given the rising use of large language models in automated scoring systems, there is renewed focus on "hallucinations" and on the robustness of LLM-based automated scoring approaches to construct-irrelevant factors. This study investigates the effects of construct-irrelevant factors on a dual-architecture LLM-based scoring system designed to score short essay-like open-response items in a situational judgment test. The scoring system was found to be generally robust to padding responses with meaningless text, spelling errors, and variation in writing sophistication. Duplicating large passages of text resulted in lower predicted scores on average, contradicting results from previous studies of non-LLM-based scoring systems, while off-topic responses were heavily penalized. These results provide encouraging support for the robustness of future LLM-based scoring systems when designed with construct relevance in mind.

Top tags: llm model evaluation natural language processing
Detailed tags: automated essay scoring robustness evaluation construct-irrelevant factors adversarial testing educational assessment

Measuring What Matters -- or What's Convenient?: Robustness of LLM-Based Scoring Systems to Construct-Irrelevant Factors


1️⃣ One-Sentence Summary

This paper finds that a carefully designed LLM-based automated scoring system, when evaluating short essay-style open-response items, is fairly robust to construct-irrelevant factors such as meaningless padding, spelling errors, and variation in writing style, but penalizes off-topic responses and large amounts of duplicated text, offering encouraging guidance for building more reliable AI scoring tools in the future.

Source: arXiv:2603.25674