📄 Abstract - Counterfactual Fairness Evaluation of LLM-Based Contact Center Agent Quality Assurance System
Large Language Models (LLMs) are increasingly deployed in contact-center Quality Assurance (QA) to automate agent performance evaluation and coaching feedback. While LLMs offer unprecedented scalability and speed, their reliance on web-scale training data raises concerns that demographic and behavioral biases may distort workforce assessment. We present a counterfactual fairness evaluation of LLM-based QA systems across 13 dimensions spanning three categories: Identity, Context, and Behavioral Style. Fairness is quantified with two metrics: the Counterfactual Flip Rate (CFR), the frequency of binary judgment reversals across counterfactual pairs, and the Mean Absolute Score Difference (MASD), the average shift in coaching or confidence scores across those pairs. Evaluating 18 LLMs on 3,000 real-world contact-center transcripts, we find systematic disparities, with CFR ranging from 5.4% to 13.0% and consistent MASD shifts across confidence, positive, and improvement scores. Larger, more strongly aligned models exhibit less unfairness, though fairness does not track accuracy. Contextual priming with historical performance induces the most severe degradations (CFR up to 16.4%), while implicit linguistic identity cues remain a persistent source of bias. Finally, we analyze the efficacy of fairness-aware prompting, finding that explicit instructions yield only modest improvements in evaluative consistency. Our findings underscore the need for standardized fairness-auditing pipelines before LLMs are deployed in high-stakes workforce evaluation.
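To make the two metrics concrete, here is a minimal sketch of how CFR and MASD could be computed over paired original/counterfactual judgments. The function names, data layout, and example values are illustrative assumptions, not the paper's actual implementation:

```python
from statistics import mean

def cfr(original_labels, counterfactual_labels):
    """Counterfactual Flip Rate: fraction of pairs whose binary judgment
    (e.g., pass/fail) reverses after the counterfactual perturbation."""
    assert len(original_labels) == len(counterfactual_labels)
    flips = sum(o != c for o, c in zip(original_labels, counterfactual_labels))
    return flips / len(original_labels)

def masd(original_scores, counterfactual_scores):
    """Mean Absolute Score Difference: average absolute shift in a numeric
    score (e.g., coaching or confidence) across counterfactual pairs."""
    assert len(original_scores) == len(counterfactual_scores)
    return mean(abs(o - c) for o, c in zip(original_scores, counterfactual_scores))

# Hypothetical example: 1 flip out of 5 pairs -> CFR = 0.2
print(cfr([1, 0, 1, 1, 0], [1, 1, 1, 1, 0]))    # 0.2
# Score shifts of 0.5, 0.0, 1.0 -> MASD = 0.5
print(masd([4.0, 3.5, 5.0], [3.5, 3.5, 4.0]))   # 0.5
```

Under this reading, a CFR of 13.0% would mean that roughly one in eight binary QA judgments flips when only the perturbed attribute changes.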
Counterfactual Fairness Evaluation of LLM-Based Contact Center Agent Quality Assurance System
1️⃣ One-Sentence Summary
Through counterfactual fairness evaluation, this study finds that although large language models can assess contact-center agent performance efficiently, their evaluations are systematically biased by attributes such as an agent's identity, conversational context, and behavioral style; simple prompt tuning cannot fully remove this bias, so standardized fairness auditing is needed before deployment in high-stakes workforce evaluation.