arXiv submission date: 2026-02-23
📄 Abstract - Benchmarking Unlearning for Vision Transformers

Research in machine unlearning (MU) has gained strong momentum: MU is now widely regarded as a critical capability for building safe and fair AI. In parallel, research into transformer architectures for computer vision tasks has been highly successful: Vision Transformers (VTs) increasingly emerge as strong alternatives to CNNs. Yet MU research for vision tasks has largely centered on CNNs, not VTs. While MU benchmarking efforts have addressed LLMs, diffusion models, and CNNs, none exist for VTs. This work is the first to address this gap, benchmarking MU algorithm performance across different VT families (ViT and Swin-T) and at different capacities. The work employs (i) different datasets, selected to assess the impacts of dataset scale and complexity; (ii) different MU algorithms, selected to represent fundamentally different approaches to MU; and (iii) both single-shot and continual unlearning protocols. Additionally, it focuses on benchmarking MU algorithms that leverage training data memorization, since leveraging memorization has recently been discovered to significantly improve the performance of previously SOTA algorithms. En route, the work characterizes how VTs memorize training data relative to CNNs, and assesses the impact of different memorization proxies on performance. The benchmark uses unified evaluation metrics that capture two complementary notions of forget quality along with accuracy on unseen (test) data and on retained data. Overall, this work offers a benchmarking basis, enabling reproducible, fair, and comprehensive comparisons of existing (and future) MU algorithms on VTs. And, for the first time, it sheds light on how well existing algorithms work in VT settings, establishing a promising reference performance baseline.
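The abstract mentions unified evaluation metrics covering two complementary notions of forget quality plus accuracy on retained and unseen data, but does not define them. As a hedged illustration only, the sketch below uses common placeholder definitions from the MU literature: utility as retain/test accuracy, and forget quality measured against a retrain-from-scratch reference model, via (1) the accuracy gap on the forget set and (2) prediction agreement with the reference. The function names and metric choices are assumptions, not the paper's actual metrics.

```python
# Hedged sketch of MU evaluation metrics; the exact definitions used by the
# paper are not given in the abstract, so these are illustrative proxies.

def accuracy(preds, labels):
    """Fraction of positions where prediction matches label."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def evaluate_unlearning(model_preds, retrained_preds, labels):
    """Evaluate an unlearned model against a retrained-from-scratch reference.

    model_preds / retrained_preds: dicts mapping split name -> predicted labels
    labels: dict mapping split name -> true labels, with keys
            'forget', 'retain', and 'test'.
    """
    report = {}
    # Utility: accuracy on retained training data and on unseen test data.
    report["retain_acc"] = accuracy(model_preds["retain"], labels["retain"])
    report["test_acc"] = accuracy(model_preds["test"], labels["test"])
    # Forget quality, notion 1: the unlearned model's forget-set accuracy
    # should match the retrained reference (small gap = good forgetting).
    fa_unlearned = accuracy(model_preds["forget"], labels["forget"])
    fa_retrained = accuracy(retrained_preds["forget"], labels["forget"])
    report["forget_acc_gap"] = abs(fa_unlearned - fa_retrained)
    # Forget quality, notion 2: per-example agreement with the retrained
    # model's forget-set predictions (a simple indistinguishability proxy).
    report["forget_agreement"] = accuracy(
        model_preds["forget"], retrained_preds["forget"]
    )
    return report
```

In practice the reference retrained model is trained on the retain set only; an unlearning algorithm is judged by how closely it approaches that reference while keeping retain/test accuracy high.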

Top tags: computer vision, model training, model evaluation
Detailed tags: machine unlearning, vision transformers, benchmark, memorization, forgetting quality

Benchmarking Unlearning for Vision Transformers


1️⃣ One-Sentence Summary

This paper establishes the first benchmark for machine unlearning on Vision Transformers. By systematically evaluating different algorithms across models and datasets, it characterizes how Vision Transformers memorize training data and provides a reproducible evaluation basis for developing safer, fairer AI.

Source: arXiv 2602.20114