
arXiv submission date: 2026-02-16
📄 Abstract - Concept Influence: Leveraging Interpretability to Improve Performance and Efficiency in Training Data Attribution

As large language models are increasingly trained and fine-tuned, practitioners need methods to identify which training data drive specific behaviors, particularly unintended ones. Training Data Attribution (TDA) methods address this by estimating datapoint influence. Existing approaches such as influence functions are computationally expensive and attribute based on single test examples, which can bias results toward syntactic rather than semantic similarity. To address these issues of scalability and attribution to abstract behavior, we leverage interpretable structures within the model during attribution. First, we introduce Concept Influence, which attributes model behavior to semantic directions (such as linear probes or sparse autoencoder features) rather than individual test examples. Second, we show that simple probe-based attribution methods are first-order approximations of Concept Influence that achieve comparable performance while being over an order of magnitude faster. We empirically validate Concept Influence and its approximations on emergent misalignment benchmarks and real post-training datasets, demonstrating performance comparable to classical influence functions while being substantially more scalable. More broadly, we show that incorporating interpretable structure into traditional TDA pipelines enables more scalable and explainable attribution, and better control of model behavior through data.
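The probe-based first-order approximation described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the variable names, shapes, and the use of raw activation projections onto a concept direction are assumptions standing in for the real probe/SAE machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: hidden-state representations of N training examples
# in a d-dimensional activation space (random stand-ins).
N, d = 100, 16
acts = rng.normal(size=(N, d))

# A "concept direction" such as a linear-probe weight vector or a
# sparse-autoencoder feature direction (here: a random unit vector).
v = rng.normal(size=d)
v /= np.linalg.norm(v)

# First-order, probe-based attribution: score each training example
# by the projection of its representation onto the concept direction,
# rather than by similarity to a single test example's gradient.
scores = acts @ v

# Rank training examples by estimated influence on the concept.
top = np.argsort(-scores)[:5]
print(top, scores[top])
```

The point of the sketch is the shape of the computation: one dot product per training example against a fixed semantic direction, which is why this family of approximations scales far better than Hessian-based influence functions.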

Top-level tags: llm, model training, model evaluation
Detailed tags: training data attribution, interpretability, influence functions, concept-based attribution, scalable methods

Concept Influence: Leveraging Interpretability to Improve Performance and Efficiency in Training Data Attribution


1️⃣ One-Sentence Summary

This paper proposes a new method called "Concept Influence," which traces the influence of training data on model behavior by analyzing interpretable semantic concepts inside the model rather than individual test examples, substantially improving the efficiency and scalability of attribution analysis while maintaining accuracy.

Source: arXiv:2602.14869