Analyzing Fairness of Neural Network Prediction via Counterfactual Dataset Generation
1️⃣ One-sentence summary
This paper proposes a new method that generates a "counterfactual" dataset by carefully modifying the labels of a small number of training examples and then retraining the model. The retrained model is used to detect and explain whether a neural network's prediction depends on biased labels in the training data, thereby assessing the model's fairness.
Interpreting the inference-time behavior of deep neural networks remains a challenging problem. Existing approaches to counterfactual explanation typically ask: What is the closest alternative input that would alter the model's prediction in a desired way? In contrast, we explore counterfactual datasets. Rather than perturbing the input, our method efficiently finds the closest alternative training dataset, one that differs from the original dataset by changing a few labels. Training a new model on this altered dataset can then lead to a different prediction for a given test instance. This perspective provides a new way to assess fairness by directly analyzing the influence of label bias on training and inference. Our approach can be characterized as probing whether a given prediction depends on biased labels. Since exhaustively enumerating all possible alternative datasets is infeasible, we develop analysis techniques that trace how bias in the training data may propagate through the learning algorithm to the trained network. Our method heuristically ranks and modifies the labels of a bounded number of training examples to construct a counterfactual dataset, retrains the model, and checks whether its prediction on a chosen test case changes. We evaluate our approach on feedforward neural networks across more than 1,100 test cases from 7 widely used fairness datasets. Results show that it modifies only a small subset of training labels, highlighting its ability to pinpoint the critical training examples that drive prediction changes. Finally, we demonstrate how our counterfactual datasets reveal connections between training examples and test cases, offering an interpretable way to probe dataset bias.
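The abstract's flip-labels, retrain, and re-check loop can be illustrated with a minimal sketch. The snippet below uses scikit-learn's `MLPClassifier` as the feedforward network and a simple nearest-neighbor ranking heuristic as a stand-in for the paper's bias-propagation analysis; the function name `counterfactual_dataset_probe`, the `budget` parameter, and the ranking rule are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of the counterfactual-dataset probe:
# rank training examples with a heuristic, flip the labels of at most
# `budget` of them, retrain, and check whether the prediction on a chosen
# test instance changes. Binary labels (0/1) are assumed for simplicity.
import numpy as np
from sklearn.neural_network import MLPClassifier


def counterfactual_dataset_probe(X_train, y_train, x_test, budget=10, seed=0):
    """Return (changed, flipped_indices): whether flipping at most `budget`
    training labels changes the model's prediction on x_test."""
    base = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    base.fit(X_train, y_train)
    original_pred = base.predict(x_test.reshape(1, -1))[0]

    # Illustrative ranking heuristic (not the paper's): training points that
    # share the test instance's predicted label and lie closest to it in
    # feature space are the most promising candidates to flip.
    same_label = np.where(y_train == original_pred)[0]
    order = same_label[np.argsort(np.linalg.norm(X_train[same_label] - x_test, axis=1))]
    candidates = order[:budget]

    # Build the counterfactual dataset by flipping the selected labels.
    y_cf = y_train.copy()
    y_cf[candidates] = 1 - y_cf[candidates]

    # Retrain on the altered dataset and re-check the prediction.
    retrained = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    retrained.fit(X_train, y_cf)
    new_pred = retrained.predict(x_test.reshape(1, -1))[0]
    return new_pred != original_pred, candidates
```

If `changed` is True, the returned indices point to training examples whose labels, under this heuristic, were enough to flip the test prediction; the paper's contribution lies in finding such small label sets more carefully by tracing how label bias propagates through training.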
Source: arXiv: 2602.10457