Evaluating Counterfactual Explanation Methods on Incomplete Inputs
1️⃣ One-sentence summary
Through a systematic evaluation, this paper finds that existing methods for generating counterfactual explanations for machine learning models generally perform poorly when the input data contains missing values; even the more robust methods struggle to find valid explanations for incomplete inputs, underscoring the need for new methods designed specifically to handle missing data.
Existing algorithms for generating Counterfactual Explanations (CXs) for Machine Learning (ML) models typically assume fully specified inputs. However, real-world data often contains missing values, and the impact of these incomplete inputs on the performance of existing CX methods remains unexplored. To address this gap, we systematically evaluate recent CX generation methods on their ability to provide valid and plausible counterfactuals when inputs are incomplete. As part of this investigation, we hypothesize that robust CX generation methods will be better suited to the challenge of providing valid and plausible counterfactuals for incomplete inputs. Our findings reveal that while robust CX methods achieve higher validity than non-robust ones, all methods struggle to find valid counterfactuals. These results motivate the need for new CX methods capable of handling incomplete inputs.
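The evaluation described above hinges on *validity*: a counterfactual is valid if the model actually assigns it the desired class. The following is a minimal toy sketch of that idea, not the paper's actual methodology: it uses a hypothetical linear classifier, naive mean imputation for the missing feature, and a simple gradient-direction search for the counterfactual (all three choices are illustrative assumptions).

```python
import numpy as np

# Hypothetical linear classifier: predict 1 iff w.x + b > 0 (illustrative only).
w = np.array([1.0, -2.0, 0.5])
b = -0.25

def predict(x):
    return int(w @ x + b > 0)

# Incomplete input: the second feature is missing (NaN).
x_incomplete = np.array([0.2, np.nan, 0.1])

# Naive mean imputation (stand-in for whatever imputer a real pipeline uses).
feature_means = np.array([0.0, 0.5, 0.0])
x_imputed = np.where(np.isnan(x_incomplete), feature_means, x_incomplete)

def counterfactual(x, target, step=0.05, max_iter=200):
    """Toy search: nudge x along the score gradient until the class flips."""
    direction = (w / np.linalg.norm(w)) * (1 if target == 1 else -1)
    cf = x.copy()
    for _ in range(max_iter):
        if predict(cf) == target:
            break
        cf = cf + step * direction
    return cf

target = 1 - predict(x_imputed)          # flip the current prediction
cf = counterfactual(x_imputed, target)

# Validity check on the imputed point: does the counterfactual reach the target class?
valid = predict(cf) == target
```

The catch the paper highlights is that this check is computed against an *imputed* point: a counterfactual that looks valid for one fill-in of the missing feature may be invalid for the true (unobserved) value, which is one reason even robust methods struggle on incomplete inputs.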
Source: arXiv:2604.08004