arXiv submission date: 2026-03-12
📄 Abstract - BackdoorIDS: Zero-shot Backdoor Detection for Pretrained Vision Encoder

Self-supervised and multimodal vision encoders learn strong visual representations that are widely adopted in downstream vision tasks and large vision-language models (LVLMs). However, downstream users often rely on third-party pretrained encoders with uncertain provenance, exposing them to backdoor attacks. In this work, we propose BackdoorIDS, a simple yet effective zero-shot, inference-time backdoor sample detection method for pretrained vision encoders. BackdoorIDS is motivated by two observations: Attention Hijacking and Restoration. Under progressive input masking, a backdoored image initially concentrates attention on malicious trigger features. Once the masking ratio exceeds the trigger's robustness threshold, the trigger is deactivated, and attention rapidly shifts to benign content. This transition induces a pronounced change in the image embedding, whereas embeddings of clean images evolve more smoothly as masking progresses. BackdoorIDS operationalizes this signal by extracting an embedding sequence along the masking trajectory and applying density-based clustering such as DBSCAN. An input is flagged as backdoored if its embedding sequence forms more than one cluster. Extensive experiments show that BackdoorIDS consistently outperforms existing defenses across diverse attack types, datasets, and model families. Notably, it is a plug-and-play approach that requires no retraining and operates fully zero-shot at inference time, making it compatible with a wide range of encoder architectures, including CNNs, ViTs, CLIP, and LLaVA-1.5.
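The detection rule in the abstract (embed the image at increasing masking ratios, then flag the input if the resulting embedding sequence splits into more than one density-based cluster) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny DBSCAN, the synthetic 2-D "embedding trajectories", and the `eps`/`min_pts` values are all assumptions standing in for real encoder outputs and tuned hyperparameters.

```python
import numpy as np

def dbscan(X, eps=0.5, min_pts=3):
    """Toy DBSCAN: returns a cluster label per point (-1 = noise)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_pts for nb in neighbors]  # neighborhood includes self
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cid                      # start a new cluster from a core point
        queue = list(neighbors[i])
        while queue:                         # expand through density-reachable points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid
                if core[j]:
                    queue.extend(neighbors[j])
        cid += 1
    return labels

def is_backdoored(embed_seq, eps=0.5, min_pts=3):
    """Flag an input if its masking-trajectory embeddings form >1 cluster."""
    labels = dbscan(np.asarray(embed_seq, dtype=float), eps, min_pts)
    return len(set(labels) - {-1}) > 1

# Synthetic stand-ins for encoder embeddings across 20 masking ratios:
# a clean image drifts smoothly; a backdoored one jumps once the trigger dies.
t = np.linspace(0.0, 1.0, 20)[:, None]
clean_seq = np.hstack([t, t]) * 2.0                      # small, even steps
blob = 0.05 * np.arange(10)[:, None] * np.ones((1, 2))   # tight pre-jump blob
bd_seq = np.vstack([blob, blob + 5.0])                   # abrupt embedding shift

print(is_backdoored(clean_seq))  # smooth trajectory -> one cluster -> False
print(is_backdoored(bd_seq))     # two well-separated blobs -> True
```

In practice the sequence would come from re-encoding the same image at each masking ratio with the frozen encoder; the clustering step itself needs no training, which is what makes the method zero-shot and plug-and-play.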

Top-level tags: computer vision, model evaluation, systems
Detailed tags: backdoor detection, zero-shot, vision encoders, security, adversarial robustness

BackdoorIDS: Zero-shot Backdoor Detection for Pretrained Vision Encoder


1️⃣ One-sentence summary

This paper proposes BackdoorIDS, a zero-shot detection method that identifies backdoored samples in pretrained vision encoders by observing the abrupt shift in attention features as an image is progressively masked; it is plug-and-play and requires no model retraining.

Source: arXiv:2603.11664