arXiv submission date: 2026-03-16
📄 Abstract - RAZOR: Ratio-Aware Layer Editing for Targeted Unlearning in Vision Transformers and Diffusion Models

Transformer-based diffusion and vision-language models have achieved remarkable success, yet efficiently removing undesirable or sensitive information without retraining remains a central challenge for model safety and compliance. We introduce Ratio-Aware Zero/One-step Optimized Retentive unlearning (RAZOR), a lightweight, model-agnostic unlearning framework that generalizes forgetting updates to coordinated multi-layer and multi-head edits within transformer backbones. RAZOR identifies the most important layers and attention heads by measuring how much they contribute to forgetting the target data while preserving useful knowledge. It then updates these components with a carefully regularized rule to avoid harming overall performance. The set of edited components grows gradually, ensuring precise unlearning without over-editing or damaging unrelated capabilities. We evaluate RAZOR on CLIP, Stable Diffusion, and vision-language models (VLMs) using widely adopted unlearning benchmarks covering identity, style, and object erasure tasks. Our results show that RAZOR achieves highly accurate and stable forgetting, even under quantization, with stronger retention and better efficiency than prior methods. Notably, it also operates significantly faster than conventional techniques. These results demonstrate that RAZOR is a practical and scalable solution for safe, adaptive unlearning in transformer-based vision models.
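The abstract's core loop, scoring components by their forgetting-vs-retention contribution, editing only the top-scored ones with a regularized update, can be sketched in a few lines. This is a minimal illustrative reconstruction, not the paper's actual algorithm: the ratio criterion, the `razor_step` update rule, and all names are hypothetical assumptions based solely on the abstract's description.

```python
import numpy as np

def ratio_scores(forget_grads, retain_grads, eps=1e-8):
    """Hypothetical 'ratio-aware' criterion: a component matters for
    unlearning if its gradient on the forget set is large relative to
    its gradient on the retain set (the paper's exact metric may differ)."""
    return {name: np.linalg.norm(forget_grads[name])
                  / (np.linalg.norm(retain_grads[name]) + eps)
            for name in forget_grads}

def razor_step(params, forget_grads, retain_grads, k=1, lr=0.1, lam=0.5):
    """Edit only the top-k scored components: ascend the forget loss
    while a retain-gradient penalty protects useful knowledge
    (illustrative update rule, assumed for this sketch)."""
    scores = ratio_scores(forget_grads, retain_grads)
    edited = sorted(scores, key=scores.get, reverse=True)[:k]
    new_params = dict(params)
    for name in edited:
        new_params[name] = (params[name]
                            + lr * forget_grads[name]          # push away from forget data
                            - lr * lam * retain_grads[name])   # regularize toward retention
    return new_params, edited

# Toy example: four attention heads; "head2" dominates the forget gradient.
params = {f"head{i}": np.zeros(4) for i in range(4)}
forget = {"head0": np.full(4, 0.1), "head1": np.full(4, 0.2),
          "head2": np.full(4, 5.0), "head3": np.full(4, 0.1)}
retain = {f"head{i}": np.ones(4) for i in range(4)}

new_params, edited = razor_step(params, forget, retain, k=1)
print(edited)  # ['head2'] -- only the dominant head is edited
```

Growing `k` across steps would mirror the abstract's gradually expanding edit set, stopping once forgetting metrics plateau to avoid over-editing.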

Top tags: model training, computer vision, multi-modal
Detailed tags: model unlearning, vision transformers, diffusion models, attention editing, safety

RAZOR: Ratio-Aware Layer Editing for Targeted Unlearning in Vision Transformers and Diffusion Models


1️⃣ One-sentence summary

This paper proposes RAZOR, a lightweight, general-purpose method that intelligently identifies and precisely edits the layers and attention heads most responsible for specific information in transformer models, efficiently and safely removing sensitive or undesirable content while preserving overall performance.

Source: arXiv 2603.14819