Low-Rank Adaptation for Adversarial Perturbation
1️⃣ One-Sentence Summary
This paper finds that adversarial perturbations, like model parameter updates, have an inherently low-rank structure. Exploiting this property, the authors design a two-step method (first using a reference model and auxiliary data to build a low-dimensional gradient-projection subspace, then confining the black-box attack search to that subspace), substantially improving the efficiency and success rate of black-box adversarial attacks.
Low-Rank Adaptation (LoRA), which leverages the insight that model updates typically reside in a low-dimensional space, has significantly improved the training efficiency of Large Language Models (LLMs) by updating neural network layers using low-rank matrices. Since the generation of adversarial examples is an optimization process analogous to model training, this naturally raises the question: Do adversarial perturbations exhibit a similar low-rank structure? In this paper, we provide both theoretical analysis and extensive empirical investigation across various attack methods, model architectures, and datasets to show that adversarial perturbations indeed possess an inherently low-rank structure. This insight opens up new opportunities for improving both adversarial attacks and defenses. We mainly focus on leveraging this low-rank property to improve the efficiency and effectiveness of black-box adversarial attacks, which often suffer from excessive query requirements. Our method follows a two-step approach. First, we use a reference model and auxiliary data to guide the projection of gradients into a low-dimensional subspace. Next, we confine the perturbation search in black-box attacks to this low-rank subspace, significantly improving the efficiency and effectiveness of the adversarial attacks. We evaluated our approach across a range of attack methods, benchmark models, datasets, and threat models. The results demonstrate substantial and consistent improvements in the performance of our low-rank adversarial attacks compared to conventional methods.
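The two-step approach in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's actual implementation: `aux_grads` stands in for gradients of a reference model on auxiliary data (here just toy random matrices), the target's loss is a toy `black_box_loss`, and the search is a simple greedy random search; the names `basis`, `low_rank_attack`, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: build a low-rank subspace from reference-model gradients ---
# Toy stand-in for gradients of a reference model on auxiliary data.
d, n_aux, rank = 64, 200, 8
aux_grads = rng.normal(size=(n_aux, d)) @ rng.normal(size=(d, d)) * 0.1
# The top-`rank` right singular vectors span the low-dimensional subspace
# into which gradients (and hence perturbations) are projected.
_, _, vt = np.linalg.svd(aux_grads, full_matrices=False)
basis = vt[:rank]                       # shape (rank, d)

# --- Step 2: black-box search confined to that subspace ---
def black_box_loss(x):
    """Toy stand-in for the target model's loss (higher = more adversarial)."""
    w = np.ones(d) / np.sqrt(d)
    return float(x @ w)

def low_rank_attack(x0, eps=0.5, steps=300, sigma=0.1):
    """Greedy random search: candidate steps are sampled in the rank-dim
    subspace, lifted back to input space, and clipped to an L-inf ball."""
    best, best_loss = x0.copy(), black_box_loss(x0)
    for _ in range(steps):
        z = rng.normal(size=rank) * sigma      # rank-dim proposal
        delta = best + z @ basis - x0          # total perturbation so far
        cand = x0 + np.clip(delta, -eps, eps)  # stay in the threat model
        loss = black_box_loss(cand)
        if loss > best_loss:                   # keep only queries that help
            best, best_loss = cand, loss
    return best, best_loss

x0 = rng.normal(size=d)
x_adv, loss_adv = low_rank_attack(x0)
```

The query-efficiency gain comes from sampling in `rank` dimensions instead of `d`: each query explores a direction the reference gradients suggest is useful, rather than an arbitrary direction in the full input space.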
Source: arXiv:2604.27487