Detecting Labeling Bias Using Influence Functions
1️⃣ One-sentence summary
This paper proposes a method that uses influence functions to detect labels that are wrong due to human oversight or resource constraints, and validates its effectiveness on image recognition and medical imaging datasets, where it successfully identifies most of the mislabeled samples.
Labeling bias arises during data collection due to resource limitations or unconscious bias, leading to unequal label error rates across subgroups or misrepresentation of subgroup prevalence. Most fairness constraints assume that training labels reflect the true distribution, rendering them ineffective when labeling bias is present. This raises a challenging question: *how can we detect such labeling bias?* In this work, we investigate whether influence functions can be used to detect labeling bias. Influence functions estimate how much each training sample affects a model's predictions by leveraging the gradient and Hessian of the loss function; when labeling errors occur, influence functions can identify wrongly labeled samples in the training set, revealing the underlying failure mode. We develop a sample valuation pipeline, test it first on the MNIST dataset, and then scale it to the more complex CheXpert medical imaging dataset. To examine label noise, we introduce controlled errors by flipping 20% of the labels for one class in the dataset. Using a diagonal Hessian approximation, we obtain promising results, successfully detecting nearly 90% of mislabeled samples in MNIST. On CheXpert, mislabeled samples consistently exhibit higher influence scores. These results highlight the potential of influence functions for identifying label errors.
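The paper does not include its implementation here, but the core idea — scoring each training sample by a gradient-and-inverse-Hessian product, with the Hessian approximated by its diagonal, and ranking samples to surface flipped labels — can be sketched on a toy problem. The sketch below is a minimal illustration, not the authors' pipeline: it uses logistic regression on synthetic 2D data (instead of MNIST/CheXpert), a hand-rolled gradient-descent fit, and the self-influence score g_i^T H^{-1} g_i as the mislabel signal; all names and hyperparameters are this example's assumptions.

```python
import numpy as np

# Toy setup: linearly separable 2D data with a fraction of labels flipped,
# mimicking the paper's controlled label-noise experiment.
rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
flipped = rng.choice(n, size=20, replace=False)
y[flipped] = 1.0 - y[flipped]  # inject 10% label noise

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic-regression weights by plain gradient descent
# (a stand-in for the trained model in the paper's pipeline).
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.5 * (X.T @ (p - y) / n)

# Per-sample gradient of the cross-entropy loss: g_i = (p_i - y_i) x_i.
p = sigmoid(X @ w)
per_sample_grads = (p - y)[:, None] * X  # shape (n, d)

# Diagonal Hessian approximation of the training loss,
# H_jj = mean_i p_i (1 - p_i) x_ij^2, plus a small damping term.
h_diag = np.mean((p * (1 - p))[:, None] * X**2, axis=0) + 1e-3

# Self-influence score g_i^T H^{-1} g_i: confidently mispredicted
# (often mislabeled) samples get large gradients and hence large scores.
scores = np.sum(per_sample_grads**2 / h_diag, axis=1)

# Rank samples by score; flipped labels should concentrate at the top.
top20 = set(np.argsort(scores)[::-1][:20])
recovered = len(top20 & set(flipped))
print(f"Recovered {recovered}/20 flipped labels in the top-20 scores")
```

The diagonal approximation is what makes this tractable at scale: inverting the full Hessian is cubic in the parameter count, while the diagonal costs only one extra pass over the data.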
Source: arXiv: 2602.19130