SurrogateSHAP: Training-Free Contributor Attribution for Text-to-Image (T2I) Models
1️⃣ One-Sentence Summary
This paper proposes SurrogateSHAP, a method that efficiently quantifies the influence of individual data contributors on a text-to-image model's performance without any retraining, providing a basis for fair data compensation and model auditing.
As Text-to-Image (T2I) diffusion models are increasingly used in real-world creative workflows, a principled framework for valuing the contributors who supply collections of training data is essential for fair compensation and sustainable data marketplaces. While the Shapley value offers a theoretically grounded approach to attribution, it faces a dual computational bottleneck: (i) the prohibitive cost of exhaustive model retraining for each sampled subset of players (i.e., data contributors) and (ii) the combinatorial number of subsets needed to estimate marginal contributions due to contributor interactions. To this end, we propose SurrogateSHAP, a retraining-free framework that approximates the expensive retraining game through inference from a pretrained model. To further improve efficiency, we employ a gradient-boosted tree to approximate the utility function and derive Shapley values analytically from the tree-based model. We evaluate SurrogateSHAP across three diverse attribution tasks: (i) image quality for DDPM-CFG on CIFAR-20, (ii) aesthetics for Stable Diffusion on Post-Impressionist artworks, and (iii) product diversity for FLUX.1 on Fashion-Product data. Across settings, SurrogateSHAP outperforms prior methods while substantially reducing computational overhead, consistently identifying influential contributors across multiple utility metrics. Finally, we demonstrate that SurrogateSHAP effectively localizes data sources responsible for spurious correlations in clinical images, providing a scalable path toward auditing safety-critical generative models.
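The surrogate step described in the abstract, fitting a gradient-boosted tree to the coalition-to-utility mapping and then reading Shapley values off the tree analytically, can be sketched as below. This is a minimal illustration, not the paper's implementation: the contributor count, the synthetic utility, and the use of the `shap` library's interventional Tree SHAP (which computes exact Shapley values of a tree ensemble relative to a baseline input) are all assumptions; in SurrogateSHAP, the utility of each sampled coalition would come from inference with the pretrained T2I model rather than from a closed-form function.

```python
# Minimal sketch of the tree-surrogate + analytic-Shapley step, under the
# assumptions stated above. Variable names are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
import shap

rng = np.random.default_rng(0)
n_contributors = 20   # players in the attribution game
n_coalitions = 500    # sampled subsets, far fewer than 2**20

# 1) Sample coalitions S ⊆ N as binary membership vectors.
X = rng.integers(0, 2, size=(n_coalitions, n_contributors)).astype(float)

# 2) Utility v(S): in SurrogateSHAP this would be measured by *inference*
#    with the pretrained model on each coalition's data (no retraining).
#    Here a synthetic utility with a pairwise interaction stands in.
y = X @ rng.normal(1.0, 0.3, n_contributors) + 0.5 * X[:, 0] * X[:, 1]

# 3) Fit a gradient-boosted tree surrogate of the utility function.
surrogate = GradientBoostingRegressor(n_estimators=200, max_depth=3)
surrogate.fit(X, y)

# 4) Analytic Shapley values from the tree: with the empty coalition as the
#    baseline and the grand coalition as the input, interventional Tree SHAP
#    returns each contributor's exact Shapley value under the surrogate game
#    v(S) = surrogate(1_S), with no further subset enumeration.
baseline = np.zeros((1, n_contributors))   # represents v(∅)
grand = np.ones((1, n_contributors))       # represents v(N)
explainer = shap.TreeExplainer(
    surrogate, data=baseline, feature_perturbation="interventional"
)
phi = explainer.shap_values(grand)[0]      # one value per contributor
print("Top contributors:", np.argsort(phi)[::-1][:5])
```

Note that in this construction the attributions sum to v(N) − v(∅) by the efficiency property, so once the tree is fit, per-contributor values come essentially for free.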
Source: arXiv: 2601.22276