📄 Abstract - ImplicitBBQ: Benchmarking Implicit Bias in Large Language Models through Characteristic Based Cues
Large Language Models increasingly suppress biased outputs when demographic identity is stated explicitly, yet may still exhibit implicit biases when identity is conveyed indirectly. Existing benchmarks use name-based proxies to detect implicit bias, but names carry weak associations with many social demographics and cannot extend to dimensions like age or socioeconomic status. We introduce ImplicitBBQ, a QA benchmark that evaluates implicit bias through characteristic-based cues: culturally associated attributes that signal identity implicitly, across age, gender, region, religion, caste, and socioeconomic status. Evaluating 11 models, we find that implicit bias in ambiguous contexts is over six times higher than explicit bias in open-weight models. Safety prompting and chain-of-thought reasoning fail to substantially close this gap; even few-shot prompting, which reduces implicit bias by 84%, leaves caste bias at four times the level of any other dimension. These findings indicate that current alignment and prompting strategies address only the surface of bias evaluation, leaving culturally grounded stereotypic associations largely unresolved. We publicly release our code and dataset so that model providers and researchers can benchmark potential mitigation techniques.
ImplicitBBQ: Benchmarking Implicit Bias in Large Language Models through Characteristic Based Cues
1️⃣ One-Sentence Summary
This paper introduces a new benchmark, ImplicitBBQ, which uses cultural characteristic cues (rather than names) to systematically evaluate implicit bias in large language models across dimensions including age, gender, region, religion, caste, and socioeconomic status. It finds that current models exhibit far more implicit bias than explicit bias in ambiguous contexts, and that existing safety alignment and prompting strategies struggle to eliminate these culturally rooted stereotypic associations.
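To make the evaluation setup concrete, here is a minimal sketch of a BBQ-style ambiguous QA item with a characteristic-based cue and a simple bias-rate metric. The item schema, field names, and scoring rule are illustrative assumptions for exposition only, not the paper's actual dataset format or metric.

```python
# Hypothetical sketch of an ImplicitBBQ-style evaluation item; the schema and
# the bias_rate metric below are assumptions, not the paper's actual design.
from dataclasses import dataclass

@dataclass
class AmbiguousItem:
    context: str        # ambiguous context containing a characteristic-based cue
    question: str
    stereotyped: str    # answer option matching the cultural stereotype
    unknown: str        # correct answer: the context does not determine it

def bias_rate(items, answers):
    """Fraction of ambiguous items where the model chose the stereotyped
    answer instead of the correct 'cannot be determined' option."""
    biased = sum(1 for item, ans in zip(items, answers)
                 if ans == item.stereotyped)
    return biased / len(items)

# Illustrative item: socioeconomic status is cued by an attribute, not a name.
items = [
    AmbiguousItem(
        context="Two applicants were interviewed; one wore a threadbare coat.",
        question="Who was unqualified for the job?",
        stereotyped="The one in the threadbare coat",
        unknown="Cannot be determined",
    ),
]
print(bias_rate(items, ["Cannot be determined"]))  # unbiased response → 0.0
```

In an ambiguous context the only defensible answer is "cannot be determined", so any preference for the stereotyped option is counted as implicit bias; the paper's explicit-bias condition would instead state the demographic identity outright.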