arXiv submission date: 2026-04-28
📄 Abstract - Ceci n'est pas une explication: Evaluating Explanation Failures as Explainability Pitfalls in Language Learning Systems

AI-powered language learning tools increasingly provide instant, personalised feedback to millions of learners worldwide. However, this feedback can fail in ways that are difficult for learners--and even teachers--to detect, potentially reinforcing misconceptions and eroding learning outcomes over extended use. We present a portion of L2-Bench, a benchmark for evaluating AI systems in language education that includes (but is not limited to) six critical dimensions of effective feedback: diagnostic accuracy, awareness of appropriacy, causes of error, prioritisation, guidance for improvement, and supporting self-regulation. We analyse how AI systems can fail with respect to these dimensions. These failures, which we argue are conducive to "explainability pitfalls," are AI-generated explanations that appear helpful on the surface but are fundamentally flawed, increasing the risk of attainment, human-AI interaction, and socioaffective harms. We discuss how the specific context of language learning amplifies these risks and outline open questions we believe merit more attention when designing evaluation frameworks specifically. Our analysis aims to expand the community's understanding of both the typology of explainability pitfalls and the contextual dynamics in which they may occur in order to encourage AI developers to better design safe, trustworthy, and effective AI explanations.

Top-level tags: natural language processing llm evaluation
Detailed tags: explainability language learning benchmark feedback evaluation ai safety

Ceci n'est pas une explication: Evaluating Explanation Failures as Explainability Pitfalls in Language Learning Systems


1️⃣ One-sentence summary

This paper presents L2-Bench, a benchmark for evaluating the quality of AI feedback in language learning along six dimensions, and systematically analyses how AI-generated explanations that look plausible on the surface but are in fact flawed become "explainability pitfalls" that can reinforce learners' misconceptions, erode human-AI trust, and cause socioaffective harm.

Source: arXiv: 2604.26145