Location Not Found: Exposing Implicit Local and Global Biases in Multilingual LLMs
1️⃣ One-sentence summary
By constructing LocQA, a test set of 2,156 locale-ambiguous questions spanning 12 languages, this study systematically uncovers two structural biases in multilingual LLMs: globally, a bias toward US-relevant answers (which instruction tuning exacerbates), and within a single language, a bias toward locales with larger populations.
Multilingual large language models (LLMs) have minimized the fluency gap between languages. This advancement, however, exposes models to the risk of biased behavior, as knowledge and norms may propagate across languages. In this work, we aim to quantify models' inter- and intra-lingual biases via their ability to answer locale-ambiguous questions. To this end, we present LocQA, a test set containing 2,156 questions in 12 languages, referring to various locale-dependent facts such as laws, dates, and measurements. The questions contain no indication of the locales they relate to, other than the querying language itself. LLMs' responses to LocQA's locale-ambiguous questions thus reveal the models' implicit priors. We use LocQA to evaluate 32 models and detect two types of structural bias. Inter-lingually, we show a global bias towards answers relevant to the US locale, even when models are queried in languages other than English. Moreover, we find that this global bias is exacerbated in models that underwent instruction tuning, compared to their base counterparts. Intra-lingually, we show that when multiple locales are relevant for the same language, models act as demographic probability engines, prioritizing locales with larger populations. Taken together, insights from LocQA may help in shaping LLMs' desired local behavior, and in quantifying the impact of various training phases on different kinds of biases.
Source: arXiv: 2604.19292