arXiv submission date: 2026-03-16
📄 Abstract - The Hrunting of AI: Where and How to Improve English Dialectal Fairness

It is known that large language models (LLMs) underperform on English dialects, and that improving them is difficult due to data scarcity. In this work we investigate how data quality and availability affect the feasibility of improving LLMs in this context. To do so, we evaluate three rarely studied English dialects (Yorkshire, Geordie, and Cornish), plus African-American Vernacular English, with West Frisian as a control. We find that human-human agreement on LLM generation quality directly impacts LLM-as-a-judge performance: LLM-human agreement mimics the human-human agreement pattern, as do metrics such as accuracy. This is an issue because LLM-human agreement measures an LLM's alignment with the human consensus, and it raises questions about the feasibility of improving LLM performance in locales where small populations induce low agreement. We also note that fine-tuning does not eradicate, and might even amplify, this pattern in English dialects. However, we also find encouraging signals, such as some LLMs' ability to generate high-quality data, which enables scalability. We argue that data must be carefully evaluated to ensure fair and inclusive LLM improvement, and that, in the presence of scarcity, new tools are needed to handle the pattern we found.

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: dialectal fairness, evaluation, data scarcity, fine-tuning, human-llm agreement

The Hrunting of AI: Where and How to Improve English Dialectal Fairness


1️⃣ One-sentence summary

This paper finds that large language models perform poorly on minority English dialects because these dialects have few speakers and scarce data, and that improving the models is difficult because human judges themselves disagree on how to evaluate these dialects, which makes it hard for models to learn; however, the study also finds that some models can generate high-quality dialectal data, which opens a path toward future improvement.

Source: arXiv 2603.15187