arXiv submission date: 2026-04-09
📄 Abstract - U-CECE: A Universal Multi-Resolution Framework for Conceptual Counterfactual Explanations

As AI models grow more complex, explainability is essential for building trust, yet concept-based counterfactual methods still face a trade-off between expressivity and efficiency. Representing underlying concepts as atomic sets is fast but misses relational context, whereas full graph representations are more faithful but require solving the NP-hard Graph Edit Distance (GED) problem. We propose U-CECE, a unified, model-agnostic multi-resolution framework for conceptual counterfactual explanations that adapts to data regime and compute budget. U-CECE spans three levels of expressivity: atomic concepts for broad explanations, relational sets-of-sets for simple interactions, and structural graphs for full semantic structure. At the structural level, both a precision-oriented transductive mode based on supervised Graph Neural Networks (GNNs) and a scalable inductive mode based on unsupervised graph autoencoders (GAEs) are supported. Experiments on the structurally divergent CUB and Visual Genome datasets characterize the efficiency-expressivity trade-off across levels, while human surveys and LVLM-based evaluation show that the retrieved structural counterfactuals are semantically equivalent to, and often preferred over, exact GED-based ground-truth explanations.
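The abstract contrasts cheap atomic-set explanations with faithful graph explanations that require solving the NP-hard Graph Edit Distance problem. As a rough intuition for that cost, here is a toy, self-contained sketch (not the paper's method): for two small equal-size labeled graphs, it enumerates all n! node bijections and counts label substitutions plus edge mismatches. Real GED also handles node insertions and deletions, which this simplification omits.

```python
# Toy illustration of why exact GED is expensive: brute-force search
# over all n! node mappings between two small labeled graphs.
# Simplified sketch only — assumes equal-size graphs, no insert/delete.
from itertools import permutations

def toy_ged(labels1, edges1, labels2, edges2):
    n = len(labels1)
    assert n == len(labels2), "sketch assumes equal-size graphs"
    best = float("inf")
    for perm in permutations(range(n)):  # n! candidate mappings
        # node substitution cost: mapped labels that differ
        cost = sum(labels1[i] != labels2[perm[i]] for i in range(n))
        # edge cost: edges present in one graph but not the other
        mapped = {frozenset((perm[u], perm[v])) for u, v in edges1}
        other = {frozenset(e) for e in edges2}
        cost += len(mapped ^ other)
        best = min(best, cost)
    return best

# Hypothetical concept graphs: "bird-wing-beak" vs "bird-wing-tail"
# differ by one node label substitution.
labels1, edges1 = ["bird", "wing", "beak"], {(0, 1), (0, 2)}
labels2, edges2 = ["bird", "wing", "tail"], {(0, 1), (0, 2)}
print(toy_ged(labels1, edges1, labels2, edges2))  # → 1
```

The factorial loop makes this exact search infeasible beyond a handful of concepts, which is why U-CECE's structural level relies on learned GNN/GAE embeddings rather than exact GED at query time.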

Top-level tags: machine learning, model evaluation, theory
Detailed tags: explainable ai, counterfactual explanations, graph neural networks, concept-based reasoning, multi-resolution framework

U-CECE: A Universal Multi-Resolution Framework for Conceptual Counterfactual Explanations


1️⃣ One-sentence summary

This paper proposes a universal framework called U-CECE, which resolves the trade-off between expressivity and computational efficiency in AI explanation methods by offering explanations at adjustable levels of detail, from simple atomic concepts up to full graph structures.
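At the cheapest of those levels, atomic concepts, retrieving a counterfactual amounts to finding the target-class example whose concept set needs the fewest additions and removals relative to the source. The following is a minimal hypothetical sketch of that idea (the concept names and helper function are illustrative, not from the paper):

```python
# Hypothetical sketch of atomic-level counterfactual retrieval:
# pick the target-class concept set minimizing set-edit cost,
# measured as the size of the symmetric difference.
def atomic_counterfactual(source_concepts, target_examples):
    """Return (best_concept_set, num_edits) among target examples."""
    def cost(other):
        return len(source_concepts ^ other)  # adds + removals
    best = min(target_examples, key=cost)
    return best, cost(best)

src = {"wings", "beak", "small"}
targets = [{"wings", "beak", "large"}, {"fins", "scales"}]
best, edits = atomic_counterfactual(src, targets)
print(best, edits)  # → {'wings', 'beak', 'large'} 2
```

Because sets carry no relations between concepts, this level is fast but blind to relational context, which is exactly the gap the set-of-sets and graph levels are meant to fill.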

Source: arXiv:2604.08295