Exploring SAIG Methods for an Objective Evaluation of XAI
1️⃣ One-Sentence Summary
This paper presents the first systematic review and analysis of "Synthetic Artificial Intelligence Ground truth" (SAIG) techniques for evaluating explainable AI methods; by proposing a new taxonomy, it reveals the field's current lack of consensus and underscores the need for future standardization research.
The evaluation of eXplainable Artificial Intelligence (XAI) methods is a rapidly growing field, characterized by a wide variety of approaches. This diversity highlights the complexity of XAI evaluation, which, unlike traditional AI assessment, lacks a universally correct ground truth for the explanation, making objective evaluation challenging. One promising direction to address this issue involves the use of what we term Synthetic Artificial Intelligence Ground truth (SAIG) methods, which generate artificial ground truths to enable the direct evaluation of XAI techniques. This paper presents the first review and analysis of SAIG methods. We introduce a novel taxonomy to classify these approaches, identifying seven key features that distinguish different SAIG methods. Our comparative study reveals a concerning lack of consensus on the most effective XAI evaluation techniques, underscoring the need for further research and standardization in this area.
Source: arXiv: 2602.08715