arXiv submission date: 2026-01-15
📄 Abstract - LIBERTy: A Causal Framework for Benchmarking Concept-Based Explanations of LLMs with Structural Counterfactuals

Concept-based explanations quantify how high-level concepts (e.g., gender or experience) influence model behavior, which is crucial for decision-makers in high-stakes domains. Recent work evaluates the faithfulness of such explanations by comparing them to reference causal effects estimated from counterfactuals. In practice, existing benchmarks rely on costly human-written counterfactuals that serve as an imperfect proxy. To address this, we introduce a framework for constructing datasets containing structural counterfactual pairs: LIBERTy (LLM-based Interventional Benchmark for Explainability with Reference Targets). LIBERTy is grounded in explicitly defined Structural Causal Models (SCMs) of text generation; interventions on a concept propagate through the SCM until an LLM generates the counterfactual. We introduce three datasets (disease detection, CV screening, and workplace violence prediction) together with a new evaluation metric, order-faithfulness. Using them, we evaluate a wide range of methods across five models and identify substantial headroom for improving concept-based explanations. LIBERTy also enables systematic analysis of model sensitivity to interventions: we find that proprietary LLMs show markedly reduced sensitivity to demographic concepts, likely due to post-training mitigation. Overall, LIBERTy provides a much-needed benchmark for developing faithful explainability methods.
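
To make the core idea concrete, here is a minimal sketch of how an intervention on a concept could propagate through an SCM to yield a structural counterfactual pair. All names here (`SCM`, the variable set, the CV-screening toy example) are hypothetical illustrations based on the abstract, not the paper's actual implementation:

```python
# Toy SCM in the spirit of LIBERTy's structural counterfactuals (assumed,
# illustrative only): each variable is a function of its parents, and an
# intervention do(var := value) overwrites that variable before downstream
# variables are recomputed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SCM:
    functions: dict[str, Callable[[dict], object]]  # var -> f(parent values)
    order: list[str]                                # topological order

    def sample(self, interventions: dict | None = None) -> dict:
        interventions = interventions or {}
        values: dict = {}
        for var in self.order:
            if var in interventions:
                values[var] = interventions[var]   # do(var := value)
            else:
                values[var] = self.functions[var](values)
        return values

# Hypothetical SCM for CV screening: gender -> name -> CV text; experience -> CV text.
scm = SCM(
    functions={
        "gender": lambda v: "female",
        "experience": lambda v: 7,
        "name": lambda v: {"female": "Anna Meyer", "male": "Adam Meyer"}[v["gender"]],
        "cv_text": lambda v: (
            f"{v['name']} has {v['experience']} years of software experience."
        ),
    },
    order=["gender", "experience", "name", "cv_text"],
)

factual = scm.sample()
# Intervening on the concept changes all downstream variables consistently.
counterfactual = scm.sample(interventions={"gender": "male"})

print(factual["cv_text"])         # ... Anna Meyer has 7 years ...
print(counterfactual["cv_text"])  # ... Adam Meyer has 7 years ...
```

In the paper's pipeline, the final text is generated by an LLM conditioned on the upstream SCM variables rather than by a template as above; the template merely stands in for that generation step.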

Top-level tags: llm, model evaluation, theory
Detailed tags: concept-based explanations, causal inference, benchmarking, structural counterfactuals, explainable ai

LIBERTy: A Causal Framework for Benchmarking Concept-Based Explanations of LLMs with Structural Counterfactuals


1️⃣ One-sentence summary

This paper proposes LIBERTy, a new framework that systematically evaluates the faithfulness of concept-based explanation methods for large language models by constructing structural counterfactual datasets grounded in explicitly defined causal models. It finds that existing methods still have substantial room for improvement, and that proprietary LLMs show reduced sensitivity to certain demographic concepts, likely due to post-training mitigation.
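
The abstract names a new metric, order-faithfulness, without defining it. One plausible reading (an assumption based on the name alone, not the paper's definition) is that it measures whether an explanation method ranks concepts in the same order as their reference causal effects, e.g., via rank correlation:

```python
# Hedged sketch: "order-faithfulness" read as rank agreement between
# explanation scores and reference causal effects. The metric's actual
# definition in the paper may differ; all numbers below are made up.
from scipy.stats import kendalltau

reference_effects  = {"gender": 0.05, "experience": 0.40, "education": 0.20}
explanation_scores = {"gender": 0.10, "experience": 0.35, "education": 0.30}

concepts = list(reference_effects)
tau, _ = kendalltau(
    [reference_effects[c] for c in concepts],
    [explanation_scores[c] for c in concepts],
)
print(f"rank agreement (Kendall tau): {tau:.2f}")  # 1.0 = identical ordering
```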

Source: arXiv: 2601.10700